Live trading account did not send PRE_SUBMITTED and instead sent SUBMITTED for non-triggered stop limit orders, as opposed to paper trading account, which sent PRE_SUBMITTED for non-triggered orders
Hi Respected Members of the Forum,
When I tested in a paper trading account I used to get PRE_SUBMITTED status until the order was triggered, and then SUBMITTED after the trigger. Based on this logic we either cancel the order if it stays in the order book, or we get FILLED. But today we migrated to a live trading account and, to my surprise, I got SUBMITTED as the first status even when the order was not triggered, which breaks the logic we use to distinguish whether the stop loss has been triggered or not. Is there any configuration change I am missing to get the PRE_SUBMITTED order state, or is this simply how it works in a live trading account? |
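A minimal sketch of the heuristic the post relies on (names are illustrative, not ibapi types): a handler that only reports a trigger on a PreSubmitted -> Submitted transition. This is exactly what breaks if a live account reports Submitted from the start.

```python
# Hedged sketch, not IBKR-verified behavior: in paper trading the sequence for
# an untriggered stop is reportedly PreSubmitted -> Submitted; live accounts
# may report Submitted immediately, so this transition alone is not a reliable
# trigger signal.

class StopOrderTracker:
    def __init__(self):
        self.states = {}  # orderId -> last status string seen

    def on_order_status(self, order_id: int, status: str) -> bool:
        """Return True only on a PreSubmitted -> Submitted transition,
        the 'order has triggered' heuristic described in the post."""
        previous = self.states.get(order_id)
        self.states[order_id] = status
        return previous == "PreSubmitted" and status == "Submitted"
```

In an account where the first status is already Submitted, this heuristic never fires, matching the behavior described above; a more robust trigger check would likely need to compare the market price to the stop price instead of relying on status transitions.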
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Re: How can I know when a stop limit order is triggered?
Hi Levente,
In a live trading account, when I place a stop loss order I get the status SUBMITTED even when the order is not triggered. When I tested this in a paper trading account I used to get PRE_SUBMITTED for orders that were not triggered. What am I missing here? Is there any configuration change required? |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Re: How can I know when a stop limit order is triggered?
Hi ,
In live trading, when I place a stop limit order I get SUBMITTED as the first status even when the order is not triggered, while in a paper trading account I used to get PRE_SUBMITTED (until triggered), then SUBMITTED. What am I missing? Is there any configuration that needs to be done? |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Re: Timestamp is missing milliseconds in tickByTickAllLast
Jürgen,

Sorry, I read more into your use of timestamps than was actually there! I also use high-res timestamps in my platform. Windows provides timestamps in units of 100 nanoseconds, and every tick received is recorded with its own timestamp, which stays with it whether the data is recorded to file, or to a database, or simply passed to a client application.

In my platform, all knowledge of the TWS API is confined to two components: one is my own implementation of the TWS API, essentially a replacement for the original ActiveX implementation. The other is a wrapper that maps the TWS API concepts to and from the concepts employed by my platform, which is designed to be broker-independent, using configurable service providers to provide access to data, orders, contracts etc from different sources. The thirty-plus other components know nothing about the TWS API – they work entirely within the platform's own conceptual framework. Consequently the source of the data is completely hidden from the applications: they can be working against live data from IB or recorded data in a text file or an SQL database.

This structure enables me, for example, to run a trading client that is playing back multiple streams of recorded data, simultaneously, with all events correctly timed, and to display charts, place simulated orders etc. This is a great tool for learning trading skills, rather like using IB's paper trading system but with historical data, so it can be done at any time. I got this working in its multi-stream mode back in 2013, and I felt very pleased with myself when it was done, but actually I've hardly ever used it since then… (The basic single tick stream replay has been there right from the start in 2005.) The tick stream playback can also be run at full speed (rather than using the recorded inter-tick intervals), and this is used for testing trading strategies on large sets of historical data. Once again, a trading strategy isn't aware of whether it's running against live real-time data or processing historical data that is being replayed.

I too have quite a sophisticated logging system, which took inspiration from java.util.logging but can log any type of data (which makes it more of a data distribution mechanism than just traditional text logging). The same timestamping is used for log events as for market data (and everything else that needs a timestamp). I don't go to the same extremes of logging as you do – I don't bother logging all API calls, for example – as I tend to use logging mainly for diagnostic purposes, and I'll frequently add new log events if I hit a tricky-to-solve bug. And I don't make a lot of use of the ability to log non-text data, but it's a nice capability to have on occasion (I tend to use a more targeted and efficient listener mechanism for such purposes).

I had never heard of Six Sigma before you mentioned it. I certainly agree with the idea of measuring and recording as much data as you can, but when I started on my trading platform back in 2005 (after an initial prototype in 2003) one of the biggest constraints I faced was disk space. It seems almost absurd now, but the biggest disks I could afford then for my Dell PowerEdge 2400 server, with its two Pentium III processors running at 1.3GHz, were 18GB. Given the amount of market data I was collecting in text files, SQL Server, and MySQL, this meant I had to do a number of things to minimise the likelihood of running out of disk space, and not doing unnecessary logging was one of them. Having said that, I still keep the log files created by my live and simulated trading strategies during those years, and they have occasionally been useful (mostly as a reminder of how rickety the IB API was back then!...). Now I have about a thousand times as much space, though everything is mirrored so it's more like 500 times as much, but it's still an issue (time to buy a new server really, but no funds!).

(By the way, in case you're wondering, note that I don't use any of my platform components in the Contract Inspector – that's a pure TWS API application. I certainly toy with the idea of switching it to use my platform, which would give many benefits, but there are quite a few reasons why I probably won't.)

Richard

From: [email protected] <[email protected]> On Behalf Of Jürgen Reinold via groups.io
Sent: 05 August 2022 09:21
To: [email protected]
Subject: Re: [TWS API] Timestamp is missing milliseconds in tickByTickAllLast

[Jürgen's quoted message trimmed here; it appears in full later in this digest.] |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Historical Market Data Pacing and Time of Day
I have built a type of stock scanner that uses the basic scanner API to pull stocks that have gapped by a certain percentage (plus some other criteria). I load a "queue" with the results as they come in, and then I have a separate thread that pulls items off the queue and downloads contract details and historical data to build a daily ATR(14) and a cumulative RVOL. Once that is done it starts a market data stream so I can keep the gap % and the cumulative RVOL updated.
My cumulative RVOL needs 5 days' worth of historical bars (initially I used 1-min, then tried 5-min), but I am facing some problems depending on the time of day. (Note: I load the 1-min or 5-min data one day at a time.) The problem:

- During the pre-market (i.e. 4am->9.30am EST) the software runs perfectly and there is no issue. Typically I may have 10 or 15 stocks in my scanner list at this point.
- At 9.30 I start to see more symbols being added. The first 4 or 5 are added with no problem at all, then my software slows down significantly. The next 1 or 2 symbols can take 30 or 45 seconds to load, and then I get a timeout on the following symbols. The timeout is generated by my software, which waits on the thread for at most 1 minute before unblocking, cancelling the market data request, and reporting an error. After that, almost every symbol in the "queue" faces a timeout. In some cases my request to get contract details also times out.
- There is no difference whether I use 1-min or 5-min bars. If I clear my scanner and restart it after 9.30am I will only get 4-5 stocks added quickly before I face the slowdown and eventual timeouts described above. I can start my scanner after market hours, when there may be 30 or 40 stocks in the scanner, and it will load them all very quickly with no problems.

Before I spend hours and hours debugging this, could anyone tell me if they have faced the same problem with IB's historical market data with respect to time of day? Does anyone have advice to work around this if it really is a pacing issue? I am using TWS API v10.16.01 |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Delphi IABSocketAPI components v10.16, now available
Happy to report that the IABSocketAPI is now at the current version, 10.16.
Updates in the TWS feature set for v10.16 are rather minor, but our API was updated to support IPv6 and can connect to TWS / Gateway at any IPv4 or IPv6 host address. The IABSocketAPI has all the latest functions and data definitions of the latest 10.16 TWS API. It compiles in all XE, v10, and new v11 Delphi versions, and can also be used in BCB. Please see: |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Re: Timestamp is missing milliseconds in tickByTickAllLast
We are not correcting tick timestamps, Richard. We simply have exact arrival timestamps (Java class Instant) for all kinds of events, including for when TWS API callbacks take place. More background on that below. These high-resolution timestamps are not used for any trade decision making. But we do occasionally use them for response time and latency monitoring, such as estimating Tick-by-Tick data stream event delays. In other words, how much time does it take for the information about a trade or order book update at the exchange to arrive in our client in the form of Tick-by-Tick callbacks?

When you look at the IBKR one-second resolution timestamp of a Tick-by-Tick object, you generally don't know whether that event took place towards the start of that second (say a few microseconds into the second) or towards the end (nearly 1,000ms later). But for VIX and ES futures, for example, the first data we receive after market open at 17:00 Central and the start of the liquid trading hours at 08:30 Central almost certainly took place at the exchange very close to 17:00:00.000000 and 08:30:00.000000. Therefore, the difference between that timestamp and our high-resolution Tick-by-Tick callback arrival timestamp gives us a good feel for the order of magnitude of that delay. For our setup the median delay (over a few years) is 27ms. And while that is an eternity for an HFT system, for what we are doing it means that we have a very timely real-time view of what is happening at the exchange when we monitor Tick-by-Tick data streams.

So, why do we even have these high-resolution timestamps? The fancy term would probably be "orthodoxies", or in plain words "old dogs, their habits, and their tricks". You may recall that my background is not in finance or trading but rather in computer systems architecture (general purpose, fault tolerant, embedded, distributed), operating system kernel internals (Unix, Linux, Windows NT), and large-scale real-time and signal processing systems. So when I stumbled over the TWS API and started exploring what I could use it for, I had a rich set of tools, libraries, and frameworks at my disposal to base our framework on. And these were all oriented around streams, events, messages, reactive systems, and asynchronous processing. The TWS API has shortcomings, but its asynchronous and non-blocking nature fits right into what I had. And then there was this other blast from the past, called Six Sigma, that encourages you to measure and record everything you can afford to. You may not know exactly what to do with the data right away, but there will be questions in the future where that extra data and context suddenly becomes crucial.

So for us, everything is an event; events lead to streams (or flows), and streams can be distributed, shared, persisted, split, combined, filtered, aggregated, processed sequentially or in parallel, and manipulated in all kinds of creative ways. Whether that is a Tick-by-Tick data object, an execution report we receive from IBKR, a trading decision, an order placement, or anything else. That view of the world is greatly supported by a development practice we have acquired over the years that favors smaller classes where virtually all instantiated objects are immutable (e.g. all fields are "final" and no object state changes after construction). These immutable objects can freely enter parallel streams and can be processed without any risk of undesirable side effects. For the most part, making an application consists of orchestrating the kinds of event input streams the application needs, the ones it generates, and the various required processing steps.

When an object turns into an event it gets a high-resolution timestamp (the Java Instant) as well as a unique identifier that allows us to remember the temporal order of these events even if they take different paths through the streams, are aggregated, or come back together later. And the best part of this system is an EventLogger that can consume any and all of these streams, serialize events and related objects into JSON streams (even if it has no idea what these objects are), and persist them to files. The logger rolls and compresses these files every 15 minutes (configurable), so that we have compact records of everything that took place during those times. We can go back to these records later for analysis or, thanks to the high-resolution timestamps, combine them with other data or events. We can also replay these logs with extreme fidelity in case we make changes or want to try different configurations. For example, our trading strategies are not aware of whether they actually communicate over the TWS API or whether they operate on stream replays or in simulations. And finally, the EventLogger has eliminated most of the traditional application logging: application code (or stream processing modules) generates events that are logged along with all other events and objects, and in case of the need for debugging, a rich set of hints and real objects exists to go back to. So when the Tick-by-Tick data delay question came up, we had enough "context" recorded over long periods of time to quickly come up with a good estimate.

Hope that helps and makes sense,
Jürgen
|
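The delay estimate described above is simple arithmetic: IBKR tick-by-tick timestamps have one-second resolution, so for the first trades after a session open (which occur at the exchange very close to HH:MM:00.000) the difference between a high-resolution local arrival timestamp and that whole second approximates the end-to-end delivery delay. A sketch, with illustrative names and values:

```python
# Estimate delivery delay for a session-open tick: the tick's one-second
# IBKR timestamp is assumed to mark (almost exactly) when the trade occurred,
# and the local arrival time is captured at nanosecond resolution
# (e.g. Java's Instant.now(), or time.time_ns() in Python).

def open_tick_delay_ms(tick_epoch_s: int, arrival_epoch_ns: int) -> float:
    """Delay in ms between the whole-second tick timestamp and local arrival."""
    return (arrival_epoch_ns - tick_epoch_s * 1_000_000_000) / 1_000_000
```

For a tick stamped at the open that arrives 27 ms later, `open_tick_delay_ms` returns 27.0, the order of magnitude the post reports as its multi-year median. This only works for ticks whose true exchange time is pinned by an external event (the open); for an arbitrary tick the sub-second offset is unknown.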
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Re: TWS Closing
Your comment about Chromium is interesting, Mike, and could very well relate to your issues. What TWS calls the "JxBrowser" is actually a Google Chromium installation. On our Linux server it is located at /tmp/JxBrowser and occupies in excess of 350MB per Chromium version. Apparently different TWS versions need different Chromiums; our /tmp/JxBrowser is just shy of 800MB right now and has installations of versions 94.0.4606.113 and 96.0.4664.110 after I started a TWS 10.12 and a TWS 10.16. During startup (and restart), TWS makes a long series of HTTP GET and POST requests to IBKR sites as well as Google locations and the Chromium store (the last time I looked at a network trace there were 120 of them). That may involve the download of the 350MB+ Chromium installation package and the Chromium installation. I am sure TWS will not (re)start properly, or may even crash, if something goes wrong in that process. Maybe you can dig into those Chromium-related errors some more (or have IBKR do that). Jürgen |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Re: Obtaining more than 1,000 ticks of historical data
Erez Kaplan
Hi,
I use a simple but robust trick which will get as many ticks as you like: start with one call, and then in historicalDataEnd change the time and ask again. (Snippet, not complete:)

    self.reqHistoricalData(self.hisID, self.contract, xDay, "1800 S", "1 secs", BID_TYPE, 0, 1, False, [])

    def historicalDataEnd(self, reqId: int, start: str, end: str):
        self.historicDate += datetime.timedelta(minutes=30)
        self.hisID += 1 |
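The chunking idea in that snippet can be sketched in a self-contained way: step the request window forward by 30 minutes per completed request ("1800 S" of 1-second bars at a time). The helper names below are illustrative, not TWS API calls.

```python
import datetime

# Plan the sequence of end-times for chunked historical requests: each
# reqHistoricalData call would ask for the 30 minutes ending at one of
# these times, advancing after each historicalDataEnd callback.

def next_window(current_end: datetime.datetime,
                step: datetime.timedelta = datetime.timedelta(minutes=30)) -> datetime.datetime:
    """End time to use for the next request in the sequence."""
    return current_end + step

def window_plan(start: datetime.datetime, end: datetime.datetime,
                step: datetime.timedelta = datetime.timedelta(minutes=30)):
    """All chunk end-times needed to cover [start, end] in `step`-sized pieces."""
    ends, t = [], start + step
    while t <= end:
        ends.append(t)
        t = t + step
    return ends
```

Covering a one-hour span with 30-minute chunks yields two request windows; in a real client each new request would also need a fresh request id and should respect IBKR's historical data pacing.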
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Re: TWS Closing
I don't see anything in the log file; granted, I haven't ever needed to look in them before, so I'm not entirely sure what I would look for. I've attached a screenshot of my log file.
When I get to my computer in the morning, I have an error from chromium.exe: unknown software exception. Could this be related to TWS closing? It seems like an issue with Google Chrome? Not totally sure what to make of this. |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Obtaining more than 1,000 ticks of historical data
I wonder if it is possible to obtain more than 1,000 ticks of historical data from IBKR with minimal effort. We can try calling reqHistoricalTicks multiple times, but then we will get duplicate ticks, which require cleaning (with possible duplicates or omissions remaining). Any suggestions?
|
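The duplicate problem the question anticipates can be handled when stitching overlapping batches together. A hedged sketch (ticks modeled as plain `(unix_time, price, size)` tuples, not ibapi objects): keep batches in time order and, within the shared boundary second, drop exact repeats while keeping distinct trades.

```python
# Merge successive, possibly overlapping historical-tick batches.
# Within a shared boundary second, exact duplicate tuples are dropped;
# distinct trades in that same second are kept.

def merge_tick_batches(batches):
    merged, seen_in_last_second, last_t = [], set(), None
    for batch in batches:
        for tick in sorted(batch):
            t = tick[0]
            if last_t is not None and t < last_t:
                continue                      # entirely before what we already have
            if t == last_t and tick in seen_in_last_second:
                continue                      # exact duplicate at the batch boundary
            if t != last_t:
                seen_in_last_second = set()   # moved into a new second
                last_t = t
            seen_in_last_second.add(tick)
            merged.append(tick)
    return merged
```

One caveat: two genuinely distinct trades with identical time, price, and size in the boundary second would be collapsed into one, so this trades a small chance of omission for duplicate removal, matching the question's observation that perfect reconstruction is hard.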
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Re: Timestamp is missing milliseconds in tickByTickAllLast
Jürgen,

I was fascinated by your write-up about 'correcting' the tick timestamps, but I can't help wondering why you think this is worth doing at all. What benefit does it provide?

I can see that if, for example, you're building, say, 1-minute bars, then without this correction your bar that ends at 08:01:00 is incorrect because it doesn't contain the final 36ms of data that the correct bar would contain. But does this matter? And if you do make the correction, the bar is correct but you don't get it until 08:01:00.036, so any trading decision you make based on it is just as invalid as using the uncorrected timestamps.

Of course many trading strategies don't make any use of periodic bars, or volume bars or range-based bars, but that's really irrelevant. Whatever generates your trading decisions, knowing that you should have made the decision 36ms ago doesn't do anything to make the decisions better.

About the only use I can think of is if you want to compare your data with someone else's that also uses high-res timestamps, but even for that it won't really help, because they'll have a different set of sources of error, delays, etc. So while the overall data series might be more closely aligned, individual ticks will still have a range of differences.

Am I missing something?

Richard |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Re: tickByTickAllLast returns tickType = 1 instead of 4
The tickType parameter in the callback is not the same as the type you requested. Rather, it is related to the fact that "BidAsk" (3) and "MidPoint" (4) have dedicated callbacks, whereas "Last" (1) and "AllLast" (2) both use tickByTickAllLast, with the tickType parameter distinguishing between the two. |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Re: tickByTickAllLast returns tickType = 1 instead of 4
Not sure I understand exactly what you are asking. Can you clarify? […] It depends on the instrument and market conditions, but the majority of trades take place at the current Ask price or the current Bid price. Below is a snapshot from ESU2 this afternoon:
|
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||