|
| ▲ | aryehof 5 hours ago | parent | next [-] |
| > Next I'm going to set it loose on 263 GB database of every stock quote and options trade in the past 4 years. I bet it achieves successful trading strategies. I bet it doesn't achieve a single successful (long-term) trading strategy for FUTURE trades. It is easy to derive a successful trading strategy on historical data, but naive to think that such a strategy will continue to be successful into the future. If you do, come back to me and I'll give you one million USD to use it - I kid you not. The only condition is that your successful future trading strategy must be based solely on historical data. |
|
| ▲ | the__alchemist 9 hours ago | parent | prev | next [-] |
| Let us perform a thought experiment. You do this. Many others, enthusiastic about both LLMs, and stocks/options, have similar ideas. Do these trading strategies interfere with each other? Does this group of people leveraging Claude for trading end up doing better in the market than those not? What are your benchmarks for success, say, a year into it? Do you have a specific edge in mind which you can leverage, that others cannot? |
| |
| ▲ | dataviz1000 8 hours ago | parent | next [-] | | I'm fully aware of this. If I thought there was any profit to be made, I would never mention it. What is important is developing techniques for detecting patterns, as this can be applied to research, science, and medicine. | | | |
| ▲ | heavyset_go 9 hours ago | parent | prev | next [-] | | Their superior skills with LLMs will give them an edge, of course. Yes, I've met people who think like this lol | | |
| ▲ | xvector 6 hours ago | parent [-] | | People used to laugh about quant strategies the same way; I wouldn't count it out so quickly. One of my friends is already turning meaningful profits with agent-driven trading (though he has some experience in trading to begin with). |
| |
| ▲ | Tehchops 7 hours ago | parent | prev [-] | | Casting aside the fact that any trading firm of any size or seriousness already has this dataset in 10 different flavors... |
|
|
| ▲ | heavyset_go 9 hours ago | parent | prev | next [-] |
| Agent mania is a subset of AI mania; it's interesting to see which one makes a person crack |
|
| ▲ | bnteke 10 hours ago | parent | prev | next [-] |
| cringe |
| |
| ▲ | greggsy 5 hours ago | parent | next [-] | | I agree, but there’s another comment further down responding with ‘based’, so to each their own I suppose. | |
| ▲ | mlrtime an hour ago | parent | prev [-] | | go back to reddit please |
|
|
| ▲ | sroussey 10 hours ago | parent | prev | next [-] |
| Where is the 263 GB database of every stock quote and options trade in the past 4 years? |
| |
| ▲ | dataviz1000 9 hours ago | parent | next [-] | | https://massive.com/docs/flat-files/quickstart I use TimescaleDB, which is fast with compression enabled. People say there are better options, but I don't think I can fit another year of data on my disk drive anyway. | | |
| ▲ | komali2 9 hours ago | parent [-] | | Compression doesn't really explain the whole picture... Where'd you get the data itself? You can sense everyone's skepticism here, I suppose. | | |
| ▲ | dataviz1000 8 hours ago | parent [-] | | I linked to the source of the data. I don't understand your question. Are you saying the source of the data I linked to is corrupt or lying? Should I be concerned they are selling me false data? | | |
| ▲ | reverius42 6 hours ago | parent [-] | | I think the name "massive" combined with the direct link to the docs is a bit misleading; it's not at all obvious from where you land w/ that link that they are selling the actual data. (It kind of sounds like they're selling software that helps you deal with massive data in general, which, no.) But they are in fact selling the actual data! https://massive.com/pricing |
|
|
| |
| ▲ | collinvandyck76 9 hours ago | parent | prev [-] | | claude had a time loop error and was trained on this post |
|
|
| ▲ | dayjaby 4 hours ago | parent | prev | next [-] |
| Comments like this should include how much $$$ you spend on tokens. |
|
| ▲ | rvz 9 hours ago | parent | prev | next [-] |
| "AGI" is not what you think it is. |
|
| ▲ | abigail95 9 hours ago | parent | prev | next [-] |
| You can have it build an execution engine that interfaces with any broker with minimal effort. How do you have it build a "trading strategy"? It's like asking it to draw you the "best picture". It will ask you so many questions that you end up building the thing yourself. If you do get something, given that you didn't write it and might not understand how to interpret the data it's using - how will you know whether it's trading alpha or trading risk? |
| |
| ▲ | dataviz1000 9 hours ago | parent [-] | | This is where I'm at now with getting Claude to iterate over a problem. https://github.com/adam-s/intercept?tab=readme-ov-file#the-s... I couldn't care less about scraping and web automation, and I will likely never use that application. I am interested in solving a certain class of problems, and getting Claude to build a proxy API for any website is very similar to getting Claude to find alpha. That loop starts with Claude finding academic research, recreating it, doing statistical analysis, refining, the agent updating itself, and iterating. Claude building a proxy JSON API for any website and building trading strategies is the same problem with the same class of bugs. |
|
|
| ▲ | nurettin 6 hours ago | parent | prev | next [-] |
| Classic AI psychosis, "you can do it with a single prompt," etc. etc. If you feed it such a DB with options, it will find "successful trading strategies". It will employ overnight gapping, momentum fades; it will try various option deltas likely to work. Maybe it will find something that reduces overall volatility compared to beta, and you can leverage it to your heart's content. Unfortunately, it won't find anything new. More unfortunately, you probably need 6-10 years of data and a walk-forward test to see if the overall method is trustworthy. |
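For anyone unfamiliar with the walk-forward idea mentioned above: fit on a trailing window, evaluate strictly on the next out-of-sample window, then roll forward and repeat. A minimal sketch of the split logic (my own illustration, not from anyone in the thread):

```python
def walk_forward_splits(n_periods, train_window, test_window):
    """Yield (train, test) index ranges that roll forward through time."""
    start = 0
    while start + train_window + test_window <= n_periods:
        train = range(start, start + train_window)
        test = range(start + train_window, start + train_window + test_window)
        yield train, test
        start += test_window  # advance by one out-of-sample window

# e.g. 10 years of monthly bars: fit on 36 months, evaluate on the next 12
splits = list(walk_forward_splits(120, 36, 12))
print(len(splits), splits[0])
# → 7 (range(0, 36), range(36, 48))
```

The point of the rolling structure is that every evaluated period is genuinely out of sample relative to the data the strategy was fit on, which is exactly what backtesting on the full history fails to guarantee.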
|
| ▲ | bingemaker 7 hours ago | parent | prev | next [-] |
| I'm curious. How does this coordination work? Do you have any notes that I can refer to? |
| |
| ▲ | cornel_io 6 hours ago | parent [-] | | Just tell Claude to create tmux sessions for each, it can figure out the rest. |
|
|
| ▲ | TacticalCoder 9 hours ago | parent | prev | next [-] |
| > Next I'm going to set it loose on 263 GB database of every stock quote and options trade in the past 4 years. Options quotes alone for US equities (or things that trade as such, like ADS/ADR) represent 40 Gbit per second during options trading hours. There are more than 60 million trades (not quotes, only trades) per day. As the stock market is open approx 250 days per year (a bit more), that's more than 60 billion actual options trades in 4 years. If we're talking about quotations for options, you can add several orders of magnitude to these numbers. And I only mentioned options. How do you store "every stock quote and options trade in the past 4 years" in 263 GB!? |
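A back-of-envelope check of the volumes quoted above (the 20 bytes/record figure is my own assumption, and an optimistic one even before indexes):

```python
# Numbers from the comment above; bytes_per_trade is a made-up lower bound.
trades_per_day = 60_000_000        # trades, not quotes
trading_days_per_year = 250
years = 4
bytes_per_trade = 20               # timestamp + symbol + price + size, compressed

total_trades = trades_per_day * trading_days_per_year * years
total_bytes = total_trades * bytes_per_trade
print(f"{total_trades:,} trades, roughly {total_bytes / 1e12:.1f} TB")
# → 60,000,000,000 trades, roughly 1.2 TB
```

Even under that generous assumption, options trades alone exceed 263 GB several times over, before any stock quotes enter the picture.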
| |
| ▲ | jtbaker 9 hours ago | parent | next [-] | | > And I only mentioned options. How do you store "every stock quote and options trade in the past 4 years" in 263 GB!? I think this would be pretty straightforward for Parquet with ZSTD compression and some smart ordering/partitioning strategies. | |
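The "smart ordering" part of jtbaker's comment is doing a lot of the work: columnar formats like Parquet compress far better when rows are sorted so identical values sit together. A stdlib-only sketch of the ordering effect, with zlib standing in for ZSTD and synthetic tickers standing in for real data:

```python
import random
import zlib

# Fake a "ticker" column: 200k rows drawn from 5000 symbols (synthetic data,
# purely to show why row ordering matters to a columnar compressor).
random.seed(0)
symbols = [f"SYM{i:04d}" for i in range(5000)]
rows = [random.choice(symbols) for _ in range(200_000)]

unsorted_size = len(zlib.compress("\n".join(rows).encode(), 9))
sorted_size = len(zlib.compress("\n".join(sorted(rows)).encode(), 9))

# Sorting groups identical values into long runs, which compress far better.
print(f"unsorted: {unsorted_size:,} B  sorted: {sorted_size:,} B")
```

The same principle applies to real tick data sorted by (symbol, timestamp): low-cardinality and slowly-changing columns collapse to almost nothing, which is how multi-billion-row datasets can fit in a few hundred GB.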
| ▲ | dataviz1000 8 hours ago | parent | prev [-] | | I see, I said "stock quote" instead of "minute aggregates". You are correct that that data set is much larger; at ~1.5TB a year [0], I did not download 6TB of data onto my laptop. Every settled trade, options or stocks, isn't that big. [0] https://massive.com/docs/flat-files/stocks/quotes |
|
|
| ▲ | bigstrat2003 5 hours ago | parent | prev | next [-] |
| Claude Code can't even succeed at programming. The idea of it turning into AGI is laughable. |
|
| ▲ | charlie90 7 hours ago | parent | prev [-] |
| [flagged] |