
Historical bars and WAP

 

Hi all,

Is WAP available for historical bars?

Looking at the API docs.

WAP is supposed to be present in the historical bars but I don't see it (I'm using the ib_insync Python module, if that makes any difference). I can see WAP is returned in real-time bars but not in historical. I can see average. I wonder if that's basically WAP and just named differently than in the docs. I'm using IB Gateway 978.

{'average': 801.119,
 'barCount': 13,
 'close': 801.26,
 'date': datetime.datetime(2021, 2, 16, 16, 42, 30, tzinfo=datetime.timezone.utc),
 'high': 801.3,
 'low': 800.92,
 'open': 801.0,
 'volume': 20}


Also, I'm using 5-second bars in both cases (real-time and historical) and whatToShow='TRADES'.
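For what it's worth, here is a minimal ib_insync sketch of what I'm comparing (host/port and the contract are just examples). Historical BarData exposes the field as average, while RealTimeBar exposes wap, which is why I suspect they are the same quantity under different names:

from ib_insync import IB, Stock

ib = IB()
ib.connect('127.0.0.1', 4001, clientId=1)   # example Gateway setup

contract = Stock('TSLA', 'SMART', 'USD')

# Historical 5-second TRADES bars: the volume-weighted price is in bar.average
hist = ib.reqHistoricalData(contract, endDateTime='', durationStr='600 S',
                            barSizeSetting='5 secs', whatToShow='TRADES', useRTH=False)
print(hist[-1].average)

# Real-time 5-second TRADES bars: the same quantity is in bar.wap
rt = ib.reqRealTimeBars(contract, 5, 'TRADES', False)
ib.sleep(10)   # let a couple of real-time bars arrive
print(rt[-1].wap)

ib.disconnect()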

Thanks in advance,
Alex


Re: Downloading Large Amounts of Historical Data

 

Richard,

Interesting, thank you for the information. I am currently trying to get 30 min bars with a 1 yr query, which might explain some of the issues I'm having, as that's outside the sweet spots you listed... I'll shorten that up. The API definitely has a lot of character; well worth the extra development time for the margin rates and investment options though, imo.

While we're on the topic of historical data, do you know how clean IB keeps this data? I.e., do they actively check it for bad ticks, and how quickly do they apply corporate actions (splits, div adjustments, etc.)?

Alex,

I have heard of ib_insync but haven't gotten into it. I was trying to interact directly with only the official API libraries, as I was not sure how well the rest were maintained.

Mike


Re: Downloading Large Amounts of Historical Data

 

And if that's helpful, the ib_insync Python module makes synchronous calls (well, they don't exist as such, but it wraps them in such a way that they are blocking) to the IBKR API super easy.

Alex

On Tue, 16 Feb 2021 at 09:59 J G via <windmill_1965=[email protected]> wrote:
On Tue, Feb 16, 2021 at 05:38 PM, <msaracena@...> wrote:

What approach can be taken to ensure the script waits until the current request is complete before moving to the next?


Your implementation seems to revolve around using time.sleep(). That is not the best approach as you don't know how long it will take for IB to respond to your request. IB will send an indication when it has sent you all data that you requested (or that it has available). You need to wait until you receive this message from IB and then move on to your next request.


Re: Source of the ticks timestamps

 

This is extremely helpful. Thank you so much, Richard.

Am I correct in my conclusion below:
If the tick-by-tick limits are too low, the next best thing is to have real-time bars for complete trades (5 sec granularity) and reqMktData for bid/ask, which is whatever the bid/ask was when the ticker was produced (that would be another thing to confirm, I guess).

Thanks again,
Very helpful.
Alex

On Tue, 16 Feb 2021 at 08:57 Richard L King <rlking@...> wrote:

Realtime data returned by TWS from reqMktData() does not contain a timestamp. ib_insync itself adds the timestamp using your computer clock. So indeed there is no way at all to know when the tick was 'really' produced.

Data returned from reqTickByTick() DOES include a timestamp, but it's limited to a resolution of 1 second. I don't know offhand if it's added by the IB servers or by TWS, but I think this has been discussed before so try searching.

IBKR does not aggregate data returned from reqMktData: it samples it. If you compare the data returned from reqMktData with a source that definitely includes every tick, you find that every tick from reqMktData is in the full data, but not vice versa.

The sampling mechanism is very simple. Time is divided into fixed-length intervals of around 300 millisecs for stocks and futures and 100 millisecs for forex. For any given instrument, the IBKR market data server records each tick as it arrives. At the end of each period it sends the current value for each tick type to TWS, but only if it is different from the value sent to the user at the end of the previous period. To enable this to happen, it must also keep a record of what it sent last time.

This has implications: in particular, if the previous value for 'last price' was, say, 1500, and during the next period the exchange sends one or more trade reports with different values, then at the end of the period only the latest 'last price' is sent to the API, and if that happens to be the same as at the start of the period then no 'last price' is sent at all (but a 'last size' might be sent if that is different, and a 'volume' will be sent because that has increased). This mechanism explains how the reqMktData stream sometimes results in highs and lows of bars being incorrect by a tick or two: they just weren't sent because no change from the start of the 300ms period had been detected.
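As a toy illustration of that snapshot mechanism (this is my reading of it, not IBKR code; the interval and tick type names are just examples), the per-client, per-instrument logic amounts to something like:

last_sent = {}   # tick type -> value last forwarded to this client
pending = {}     # tick type -> latest value seen during the current interval

def on_exchange_tick(tick_type, value):
    # every tick is recorded, but only the most recent value per type survives
    pending[tick_type] = value

def end_of_interval():
    # roughly every 300 ms: forward only the tick types whose value changed
    for tick_type, value in pending.items():
        if last_sent.get(tick_type) != value:
            last_sent[tick_type] = value
            print('send', tick_type, value)
    pending.clear()

# two trades land in one interval; only the second last price is forwarded,
# and only because it differs from what was sent before
on_exchange_tick('lastPrice', 1500.25)
on_exchange_tick('lastPrice', 1500.00)
on_exchange_tick('volume', 1200)
end_of_interval()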

Given that there are many tens of thousands of TWS users at any one time, many of whom will have large numbers of tickers running, this might sound like a massive amount to keep track of, but it's actually quite simple (though this is speculation - I don't know for certain exactly how IBKR manage this). If the time period length is 300 milliseconds, you could imagine that the IBKR data farm has 3 servers for a particular stock or future. When a user requests data for that instrument, he is allocated to one of these 3 servers, and all his subsequent data (for that instrument) come from that server. Thus that server only needs to record one set of previously sent ticks, not one for each user. And that server now has 300 milliseconds to send out each tick to all the users it services, so the load is fairly constant and predictable. Obviously one server would serve data from many different instruments, and conversely each instrument's data would be disseminated by multiple servers, but the whole thing can be sliced and diced and load-balanced in such a way that the tick data stream is never more than 300 ms behind the market.

When I first started investigating this mechanism back in 2003, every market data line in TWS and every market data request via the API counted against your allocation of 100 tickers, even if you requested the same ticker more than once. This was presumably because the request was allocated to a different server, and in fact the data streams received were different. But at some stage in the late noughties (I think) this was refined so that the same request from the same user was only counted once: so now, if I request data for a given instrument from several different API client programs, that is only counted as one market data line, and they all receive exactly the same data (and that even applies whether the data is requested via the live or the paper-trading account).

Sorry for the lengthy reply, but I think it helps to have a clear picture of what is going on here.

From: [email protected] <[email protected]> On Behalf Of Alex Gorbachev
Sent: 16 February 2021 04:45
To: [email protected]
Subject: [TWS API] Source of the ticks timestamps

Hi all.

This must be a very noob question...

I was trying to track the delay of the real-time data I get. I use the ib_insync Python module, which makes things much easier in Python.

First of all, ping to IB takes about 27ms.

For real-time bars I take my current system timestamp, subtract the bar time, and then subtract another 5 seconds (the bar size). That gives me 500-1000ms.

For market data ticks, I subscribe using reqMktData, and the difference between the time in the ticker (including all ticks in it - they have the same timestamp) and the system time when I get them is only about 0.2ms (that's it - 200 microseconds). Now, that makes me think that the ticker timestamp produced through the reqMktData subscription is actually generated in TWS (or IB Gateway, which is what I use) and not on IBKR servers. Right? Thus, there is no way to say when those ticks were really produced. Is there?

I haven't looked at TickByTick yet, but I see that it has a very low limit on simultaneous subscriptions (3), so it won't be useful for as many contracts as I need.

Also, could someone point me to how exactly IBKR samples or aggregates ticks for reqMktData? Do I just get aggregated ticks by type and price since the last update each time?

Thanks,

Alex


Re: Downloading Large Amounts of Historical Data

 


Mike

For each bar size allowed by the API, there seems to be a maximum 'sweet' duration: if that duration is exceeded, historical data requests take much longer to complete. This must be a policy decision rather than any technical limitation.

Below is a table extracted verbatim from my code, which summarises the maximum durations I use for each bar size. These durations were basically determined by trial and error. I don't know whether they're definitive, but they seem to work well for me. Of course there's nothing to prevent IB changing their implementation in any way they see fit at any time, so there's no guarantee that this will continue to be valid forever. Indeed, this table is massively different from (and incomparably better than) a similar table I derived when I first started using the historical data API well over a decade ago.

Some of the entries in the table may surprise you: for example, you can request 50 years (yes, that's YEARS) worth of daily data for, say, Microsoft, in a single request, and the data is returned in less than 5 seconds. Since Microsoft hasn't been around that long, you get all the data IB has for them, which goes back to 1986: here's the very first bar, for 13 March 1986:

Bar date=19860313;Open=28.00;High=29.25;Low=25.50;Close=28.00;Volume=35826;WAP=28.002;Tick volume=1;

I used to swear at IB's historical data servers, but I'm very impressed with the performance now (though I think the API itself is a monstrosity).

So here's the table:

'   Bar Size        Max Duration
'   --------        ------------
'
'   1 secs          2000 S
'   5 secs          10000 S
'   10 secs         20000 S
'   15 secs         30000 S
'   30 secs         86400 S
'   1 min           86400 S
'                   6 D
'                   1 W
'   2 mins          86400 S
'                   10 D
'                   2 W
'   3 mins          86400 S
'                   10 D
'                   2 W
'   5 mins          86400 S
'                   20 D
'                   3 W
'   10 mins         86400 S
'                   50 D
'                   8 W
'   15 mins         86400 S
'                   50 D
'                   10 W
'   20 mins         86400 S
'                   50 D
'                   10 W
'   30 mins         86400 S
'                   50 D
'                   10 W
'                   3 M
'   1 hour          86400 S
'                   50 D
'                   10 W
'                   3 M
'   2 hours         86400 S
'                   50 D
'                   10 W
'                   3 M
'   3 hours         86400 S
'                   50 D
'                   10 W
'                   3 M
'   4 hours         86400 S
'                   50 D
'                   10 W
'                   3 M
'   8 hours         86400 S
'                   50 D
'                   10 W
'                   3 M
'   1 day           86400 S
'                   365 D
'                   12 M
'                   52 W
'                   50 Y
'   1 W             86400 S
'                   365 D
'                   12 M
'                   52 W
'                   50 Y
'   1 M             86400 S
'                   365 D
'                   12 M
'                   52 W
'                   50 Y
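For request code, the same maxima can be collapsed to the largest listed duration per bar size. A rough Python lookup (the dict name is mine, and the values are still only my trial-and-error maxima):

MAX_SWEET_DURATION = {
    '1 secs': '2000 S',   '5 secs': '10000 S',  '10 secs': '20000 S',
    '15 secs': '30000 S', '30 secs': '86400 S',
    '1 min': '1 W',   '2 mins': '2 W',   '3 mins': '2 W',   '5 mins': '3 W',
    '10 mins': '8 W', '15 mins': '10 W', '20 mins': '10 W',
    '30 mins': '3 M', '1 hour': '3 M',   '2 hours': '3 M',
    '3 hours': '3 M', '4 hours': '3 M',  '8 hours': '3 M',
    '1 day': '50 Y',  '1 W': '50 Y',     '1 M': '50 Y',
}

# e.g. pick the duration for a 30 min request:
duration = MAX_SWEET_DURATION['30 mins']   # '3 M'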

Richard

From: [email protected] <[email protected]> On Behalf Of msaracena@...
Sent: 16 February 2021 15:41
To: [email protected]
Subject: Re: [TWS API] Downloading Large Amounts of Historical Data

Richard,

Thank you for the thoughts here, I will work on implementing this.

I am interested in what dictates the length of time for the data to be returned from IB? In most of my requests I receive a year's worth of data in about 2-4s; however, periodically it takes as long as 30s. Is this due to other data requests the server is handling? Or is there some throttling happening?

J G,

Thank you, and yes that is a much better solution than an arbitrary sleep time.

Mike


Re: Downloading Large Amounts of Historical Data

 

Richard,

Thank you for the thoughts here, I will work on implementing this.

I am interested in what dictates the length of time for the data to be returned from IB? In most of my requests I receive a year's worth of data in about 2-4s; however, periodically it takes as long as 30s. Is this due to other data requests the server is handling? Or is there some throttling happening?

J G,

Thank you, and yes that is a much better solution than an arbitrary sleep time.

Mike


Re: trouble requesting trade data *and* quote data for three futures tickers

Nick
 

Leaky Buckets typically discard entries if the rate is exceeded, which is probably not what you want in a trading application.

If you are making a request you generally want it to be processed - preferably without delay but with a delay if necessary.

On 2/16/2021 2:23 AM, Ray Racine wrote:
A simple yet effective rate limiter (if you end up needing one here) would be to push requests into a FIFO queue and have a thread pull them out and send them rate-limited by a Leaky Bucket Rate Limiter. Very simple algo.


Re: Downloading Large Amounts of Historical Data

 

On Tue, Feb 16, 2021 at 05:38 PM, <msaracena@...> wrote:

What approach can be taken to ensure the script waits until the current request is complete before moving to the next?


Your implementation seems to revolve around using time.sleep(). That is not the best approach as you don't know how long it will take for IB to respond to your request. IB will send an indication when it has sent you all data that you requested (or that it has available). You need to wait until you receive this message from IB and then move on to your next request.
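A sketch of that with the native ibapi package (the class and event names below are mine; the relevant callback, historicalDataEnd, is the completion indication referred to above):

import threading
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class HistApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
        self.done = threading.Event()
        self.bars = []

    def historicalData(self, reqId, bar):
        self.bars.append(bar)

    def historicalDataEnd(self, reqId, start, end):
        self.done.set()   # IB has sent everything for this request

# usage, after app.connect(...) and starting app.run() on its own thread:
#   app.done.clear()
#   app.reqHistoricalData(req_id, contract, end_datetime, '1 Y', '30 mins',
#                         'MIDPOINT', 1, 1, False, [])
#   app.done.wait()       # block until the end-of-data message, no fixed sleep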


Re: C# Positions

 

Sorry, I guess I missed that you already did what I suggested in my previous message. There is an obvious error in your code sample since you call AddTextBoxItemPosition() but implement AddTextBoxItem(). Other than that it's impossible to tell what is going on in your app. In fact it's not even clear which results you are getting already and which you miss.


Re: C# Positions

 

Your EWrapper interface implementation must include a valid implementation of the callbacks that the TWS uses to communicate positions in response to ClientSocket.reqPositions(), which are EWrapper.position(...) for data and EWrapper.positionEnd() for end of data. You need to use these callbacks to handle the responses.


Re: Source of the ticks timestamps

Nick
 


Just an additional tidbit: the sample interval is somewhat of a moving target. IB now says the interval is 250ms for stocks and futures, 100ms for options and 5ms for fx pairs. But you never know with IB what the current value actually is.

Richard has also made some detailed posts on historical data rate limits, which are also a moving target and not always properly documented by IB.


On 2/16/2021 6:56 AM, Richard L King wrote:

Realtime data returned by TWS from reqMktData() does not contain a timestamp. ib_insync itself adds the timestamp using your computer clock. So indeed there is no way at all to know when the tick was 'really' produced.

Data returned from reqTickByTick() DOES include a timestamp, but it's limited to a resolution of 1 second. I don't know offhand if it's added by the IB servers or by TWS, but I think this has been discussed before so try searching.

IBKR does not aggregate data returned from reqMktData: it samples it. If you compare the data returned from reqMktData with a source that definitely includes every tick, you find that every tick from reqMktData is in the full data, but not vice versa.

The sampling mechanism is very simple. Time is divided into fixed-length intervals of around 300 millisecs for stocks and futures and 100 millisecs for forex. For any given instrument, the IBKR market data server records each tick as it arrives. At the end of each period it sends the current value for each tick type to TWS, but only if it is different from the value sent to the user at the end of the previous period. To enable this to happen, it must also keep a record of what it sent last time.

This has implications: in particular, if the previous value for 'last price' was, say, 1500, and during the next period the exchange sends one or more trade reports with different values, then at the end of the period only the latest 'last price' is sent to the API, and if that happens to be the same as at the start of the period then no 'last price' is sent at all (but a 'last size' might be sent if that is different, and a 'volume' will be sent because that has increased). This mechanism explains how the reqMktData stream sometimes results in highs and lows of bars being incorrect by a tick or two: they just weren't sent because no change from the start of the 300ms period had been detected.

Given that there are many tens of thousands of TWS users at any one time, many of whom will have large numbers of tickers running, this might sound like a massive amount to keep track of, but it's actually quite simple (though this is speculation - I don't know for certain exactly how IBKR manage this). If the time period length is 300 milliseconds, you could imagine that the IBKR data farm has 3 servers for a particular stock or future. When a user requests data for that instrument, he is allocated to one of these 3 servers, and all his subsequent data (for that instrument) come from that server. Thus that server only needs to record one set of previously sent ticks, not one for each user. And that server now has 300 milliseconds to send out each tick to all the users it services, so the load is fairly constant and predictable. Obviously one server would serve data from many different instruments, and conversely each instrument's data would be disseminated by multiple servers, but the whole thing can be sliced and diced and load-balanced in such a way that the tick data stream is never more than 300 ms behind the market.

When I first started investigating this mechanism back in 2003, every market data line in TWS and every market data request via the API counted against your allocation of 100 tickers, even if you requested the same ticker more than once. This was presumably because the request was allocated to a different server, and in fact the data streams received were different. But at some stage in the late noughties (I think) this was refined so that the same request from the same user was only counted once: so now, if I request data for a given instrument from several different API client programs, that is only counted as one market data line, and they all receive exactly the same data (and that even applies whether the data is requested via the live or the paper-trading account).

Sorry for the lengthy reply, but I think it helps to have a clear picture of what is going on here.

From: [email protected] <[email protected]> On Behalf Of Alex Gorbachev
Sent: 16 February 2021 04:45
To: [email protected]
Subject: [TWS API] Source of the ticks timestamps

Hi all.

This must be a very noob question...

I was trying to track the delay of the real-time data I get. I use the ib_insync Python module, which makes things much easier in Python.

First of all, ping to IB takes about 27ms.

For real-time bars I take my current system timestamp, subtract the bar time, and then subtract another 5 seconds (the bar size). That gives me 500-1000ms.

For market data ticks, I subscribe using reqMktData, and the difference between the time in the ticker (including all ticks in it - they have the same timestamp) and the system time when I get them is only about 0.2ms (that's it - 200 microseconds). Now, that makes me think that the ticker timestamp produced through the reqMktData subscription is actually generated in TWS (or IB Gateway, which is what I use) and not on IBKR servers. Right? Thus, there is no way to say when those ticks were really produced. Is there?

I haven't looked at TickByTick yet, but I see that it has a very low limit on simultaneous subscriptions (3), so it won't be useful for as many contracts as I need.

Also, could someone point me to how exactly IBKR samples or aggregates ticks for reqMktData? Do I just get aggregated ticks by type and price since the last update each time?

Thanks,

Alex



Re: Source of the ticks timestamps

 


Realtime data returned by TWS from reqMktData() does not contain a timestamp. ib_insync itself adds the timestamp using your computer clock. So indeed there is no way at all to know when the tick was 'really' produced.

Data returned from reqTickByTick() DOES include a timestamp, but it's limited to a resolution of 1 second. I don't know offhand if it's added by the IB servers or by TWS, but I think this has been discussed before so try searching.

IBKR does not aggregate data returned from reqMktData: it samples it. If you compare the data returned from reqMktData with a source that definitely includes every tick, you find that every tick from reqMktData is in the full data, but not vice versa.

The sampling mechanism is very simple. Time is divided into fixed-length intervals of around 300 millisecs for stocks and futures and 100 millisecs for forex. For any given instrument, the IBKR market data server records each tick as it arrives. At the end of each period it sends the current value for each tick type to TWS, but only if it is different from the value sent to the user at the end of the previous period. To enable this to happen, it must also keep a record of what it sent last time.

This has implications: in particular, if the previous value for 'last price' was, say, 1500, and during the next period the exchange sends one or more trade reports with different values, then at the end of the period only the latest 'last price' is sent to the API, and if that happens to be the same as at the start of the period then no 'last price' is sent at all (but a 'last size' might be sent if that is different, and a 'volume' will be sent because that has increased). This mechanism explains how the reqMktData stream sometimes results in highs and lows of bars being incorrect by a tick or two: they just weren't sent because no change from the start of the 300ms period had been detected.

Given that there are many tens of thousands of TWS users at any one time, many of whom will have large numbers of tickers running, this might sound like a massive amount to keep track of, but it's actually quite simple (though this is speculation - I don't know for certain exactly how IBKR manage this). If the time period length is 300 milliseconds, you could imagine that the IBKR data farm has 3 servers for a particular stock or future. When a user requests data for that instrument, he is allocated to one of these 3 servers, and all his subsequent data (for that instrument) come from that server. Thus that server only needs to record one set of previously sent ticks, not one for each user. And that server now has 300 milliseconds to send out each tick to all the users it services, so the load is fairly constant and predictable. Obviously one server would serve data from many different instruments, and conversely each instrument's data would be disseminated by multiple servers, but the whole thing can be sliced and diced and load-balanced in such a way that the tick data stream is never more than 300 ms behind the market.

When I first started investigating this mechanism back in 2003, every market data line in TWS and every market data request via the API counted against your allocation of 100 tickers, even if you requested the same ticker more than once. This was presumably because the request was allocated to a different server, and in fact the data streams received were different. But at some stage in the late noughties (I think) this was refined so that the same request from the same user was only counted once: so now, if I request data for a given instrument from several different API client programs, that is only counted as one market data line, and they all receive exactly the same data (and that even applies whether the data is requested via the live or the paper-trading account).

Sorry for the lengthy reply, but I think it helps to have a clear picture of what is going on here.

From: [email protected] <[email protected]> On Behalf Of Alex Gorbachev
Sent: 16 February 2021 04:45
To: [email protected]
Subject: [TWS API] Source of the ticks timestamps

Hi all.

This must be a very noob question...

I was trying to track the delay of the real-time data I get. I use the ib_insync Python module, which makes things much easier in Python.

First of all, ping to IB takes about 27ms.

For real-time bars I take my current system timestamp, subtract the bar time, and then subtract another 5 seconds (the bar size). That gives me 500-1000ms.

For market data ticks, I subscribe using reqMktData, and the difference between the time in the ticker (including all ticks in it - they have the same timestamp) and the system time when I get them is only about 0.2ms (that's it - 200 microseconds). Now, that makes me think that the ticker timestamp produced through the reqMktData subscription is actually generated in TWS (or IB Gateway, which is what I use) and not on IBKR servers. Right? Thus, there is no way to say when those ticks were really produced. Is there?

I haven't looked at TickByTick yet, but I see that it has a very low limit on simultaneous subscriptions (3), so it won't be useful for as many contracts as I need.

Also, could someone point me to how exactly IBKR samples or aggregates ticks for reqMktData? Do I just get aggregated ticks by type and price since the last update each time?

Thanks,

Alex


Re: Downloading Large Amounts of Historical Data

 


First, use a different tickerID for each request.

Second, don't wait for the result for one request before issuing the next. Just process the results as they arrive. They will quite likely arrive in a different order than the requests. The different ticker ids indicate which is which (you obviously have to maintain a map from ticker id to contract).

Third, bear in mind that the API allows a number of concurrent historical data requests. I'm not sure offhand what the limit is, but I think it's 50. I tend to limit it to about 20, because I find you get diminishing returns with higher concurrency.

Fourth, bear in mind that the API limits you to 50 input messages (ie API requests of all types) per second. Note also that that limit applies across all API clients currently running, ie 50 per second in total, not 50 per second per client.

Put all that together, and you need to end up with something like this:

  • A queue of requests.
  • Something that adds requests to the queue, but need not be rate-limited.
  • Something that takes requests off the queue and submits them, each with a unique ticker id, and keeps count of the outstanding requests to ensure that not more than, say, 20, are concurrently in progress, and that not more than, say, 30 requests are submitted per second.
  • Something that handles the resulting historical data callbacks, uses the ticker id to associate the results with the relevant contract, and decrements the count of outstanding requests as each one completes.


This approach is non-trivial to code, but will give you maximum throughput.
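A rough shape of that in Python with the native ibapi package (class and field names are mine; the per-second pacing mentioned above would be layered on top of this):

import queue, threading
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

MAX_CONCURRENT = 20                        # concurrent historical requests in flight

class HistPipeline(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
        self.work = queue.Queue()          # queued (req_id, contract, end, duration, bar_size)
        self.slots = threading.Semaphore(MAX_CONCURRENT)
        self.contract_for = {}             # req_id -> contract, to match results to requests

    def submit_loop(self):
        # run on its own thread after connect(); drains the queue as slots free up
        while True:
            req_id, contract, end, duration, bar_size = self.work.get()
            self.slots.acquire()           # blocks while MAX_CONCURRENT requests are outstanding
            self.contract_for[req_id] = contract
            self.reqHistoricalData(req_id, contract, end, duration, bar_size,
                                   'TRADES', 1, 1, False, [])

    def historicalData(self, reqId, bar):
        pass                               # store bar against self.contract_for[reqId]

    def historicalDataEnd(self, reqId, start, end):
        self.slots.release()               # one request finished; the next queued one can go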

The simpler approach is indeed just to wait for each request to complete before submitting the next. But you don't need to actually make a thread sleep. Just send the next request when you receive a historical data end callback.

Richard

From: [email protected] <[email protected]> On Behalf Of msaracena@...
Sent: 16 February 2021 04:05
To: [email protected]
Subject: [TWS API] Downloading Large Amounts of Historical Data

My strategy requires access to large amounts of historical data - ~1000 symbols at 10 min bar resolution for 3 years that I save locally in csv. I am pulling data through the reqHistoricalData method in 1 year batches and concatenating the resulting queries. Periodically I get "ERROR 1 322 Error processing request.-'bT' : cause - Duplicate ticker ID for API historical data query", because my script is moving to the next data request before the current one is complete.

What approach can be taken to ensure the script waits until the current request is complete before moving to the next?

app.reqHistoricalData(ticker_id, contract, query_end_iteration.strftime('%Y%m%d %H:%M:%S') + ' EST', query_period,
                      bar_size, "MIDPOINT", 1, 2, False, [])  # returns lol of data

while not app.data:  # pauses until data is returned from the server
    time.sleep(0.5)

time.sleep(3)  # once out of the loop, give it time to load all the data before writing to pd


Source of the ticks timestamps

 

Hi all.

This must be a very noob question...
I was trying to track the delay of the real-time data I get. I use the ib_insync Python module, which makes things much easier in Python.

First of all, ping to IB takes about 27ms.

For real-time bars I take my current system timestamp, subtract the bar time, and then subtract another 5 seconds (the bar size). That gives me 500-1000ms.

For market data ticks, I subscribe using reqMktData, and the difference between the time in the ticker (including all ticks in it - they have the same timestamp) and the system time when I get them is only about 0.2ms (that's it - 200 microseconds). Now, that makes me think that the ticker timestamp produced through the reqMktData subscription is actually generated in TWS (or IB Gateway, which is what I use) and not on IBKR servers. Right? Thus, there is no way to say when those ticks were really produced. Is there?

I haven't looked at TickByTick yet, but I see that it has a very low limit on simultaneous subscriptions (3), so it won't be useful for as many contracts as I need.

Also, could someone point me to how exactly IBKR samples or aggregates ticks for reqMktData? Do I just get aggregated ticks by type and price since the last update each time?

Thanks,
Alex


Downloading Large Amounts of Historical Data

 

My strategy requires access to large amounts of historical data - ~1000 symbols at 10 min bar resolution for 3 years that I save locally in csv. I am pulling data through the reqHistoricalData method in 1 year batches and concatenating the resulting queries. Periodically I get "ERROR 1 322 Error processing request.-'bT' : cause - Duplicate ticker ID for API historical data query", because my script is moving to the next data request before the current one is complete.

What approach can be taken to ensure the script waits until the current request is complete before moving to the next?

app.reqHistoricalData(ticker_id, contract, query_end_iteration.strftime('%Y%m%d %H:%M:%S') + ' EST', query_period,
                      bar_size, "MIDPOINT", 1, 2, False, [])  # returns lol of data

while not app.data:  # pauses until data is returned from the server
    time.sleep(0.5)

time.sleep(3)  # once out of the loop, give it time to load all the data before writing to pd


C# Positions

 

Trying to get current position (#'s of contracts) into a textbox. Working in c#.

Added this to Form1.cs: "ibClient.ClientSocket.reqPositions();"
Added this to EWrapperImpl.cs in the position method code: "myform.AddTextBoxItemPosition(pos);"
Added this to Form1.cs:

public void AddTextBoxItem(double pos)
{
    if (this.tbConNumber.InvokeRequired)
    {
        SetTextCallback d = new SetTextCallback(AddTextBoxItem);
        this.Invoke(d, new object[] { pos });
    }
    else
        Convert.ToInt32(pos);
    {
        this.tbConNumber.Text = pos;
    }
}

Obviously I don't know what I am doing; I have somehow managed to get a connection, data, and place orders, but without the positions I am stuck.
Any help is appreciated.
Thanks


Re: trouble requesting trade data *and* quote data for three futures tickers

 

A simple yet effective rate limiter (if you end up needing one here) would be to push requests into a FIFO queue and have a thread pull them out and send them rate-limited by a Leaky Bucket Rate Limiter. Very simple algo.

On Sat, Feb 6, 2021 at 10:39 AM Dmitry Shevkoplyas <shevkoplyas@...> wrote:
Taylor,

[no sleep... ever]
Inside your processMessages() I see "sleep for 15 seconds". What is the rest of the code doing during that blocking sleep() call? Right - nothing. Any incoming quotes, order updates, etc. - nothing will be processed, just filling up buffers (or probably overfilling them, since you will sleep 3 times for 15 seconds) while you decided to sleep in the middle of the busy working day!-)
I guess you need to re-think your implementation so that sleep() is never used anywhere. Also you don't need to...

[tick-by-tick is a stream]
You mentioned "no more than 1 tick-by-tick request can be made for the same instrument within 15 seconds". Why do you want to send the reqTickByTickData() request for the same instrument more than once? One time is enough to subscribe to the stream of updates (which will be delivered to you in the form of the documented callbacks) until you decide to cancelTickByTickData. If you don't cancel, it will keep you "subscribed" to that instrument stream for the whole day.

[outgoing request rate control]
You probably want to avoid sending >50 requests to IB in any given second. If you're interested only in those 3 contracts, there's no need to worry about outgoing request rate control; just send all 3 asap (once only). For a more serious application where you might have implemented some scanners and potentially tons of contract details requests and subscribing/cancelling other types of IB data, you might want to keep an eye on the rate of outgoing requests. But even if you see it getting close to 50 in the current second, you do not block your program; you merely skip sending requests until it is safe to do so. For this you probably want to implement some "outbox" queue (in the form of some container) with your request objects, and then only allow a "shoveler" to process that queue at a limited rate. Even more: some requests have special request limits (like historical data requests), so you will probably need a different queue for those, with its own shoveling rules (own rate).
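A bare-bones sketch of that outbox/shoveler idea (the names and the 45-per-second budget are mine; send_fn stands in for whatever actually calls the IB client):

import queue, threading, time

MAX_RATE = 45                             # stay safely under IB's ~50 messages/second

class Shoveler:
    def __init__(self, send_fn):
        self.outbox = queue.Queue()
        self.send_fn = send_fn
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, request):
        self.outbox.put(request)          # callers never block or sleep

    def _run(self):
        sent, window = 0, int(time.time())
        while True:
            request = self.outbox.get()
            now = int(time.time())
            if now != window:
                sent, window = 0, now
            if sent >= MAX_RATE:
                time.sleep(max(0.0, window + 1 - time.time()))   # delay, never drop
                sent, window = 0, int(time.time())
            self.send_fn(request)
            sent += 1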

Cheers,
Dmitry Shevkoplyas


On Fri, Feb 5, 2021 at 4:10 PM Nick <news1000@...> wrote:

The default is 100 concurrent reqMktData streams but only 3 reqTickByTick streams. Also, market depth is limited to 3 concurrent.

If you are getting the error for more than one request per 15 seconds that's probably what is happening. IB does screw up sometimes but usually it's a problem on the client side.

It would be useful to log all your activity or enable API logging so you can see what is actually being received by TWS.


On 2/5/2021 3:45 PM, tbrown122387@... wrote:
I also read "[b]y default, every user has a maxTicker Limit of 100 market data lines and as such can obtain the real time market data of up to 100 instruments simultaneously." 3 instruments is less than 100, and 6 (=3x2) requests is less than 100, so I think I'm good there.


IB's Stop Price Vs Trailing Amount

 

Any ideas on these two items' accurate/exact meanings?

Could I use the Stop Price as a profit-taking order?


Re: Fx data

 

I have not worked with FX instruments, but the reqHeadTimestamp() API call should give you an idea of the earliest data you can expect (according to the documentation).

But don't expect to get data all the way back to what reqHeadTimestamp() indicates. I always read it as the "you will never get data older than that" timestamp. Some instruments reach back that far and some don't.
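For example, with ib_insync (host/port and the pair are placeholders; the method is spelled reqHeadTimeStamp there):

from ib_insync import IB, Forex

ib = IB()
ib.connect('127.0.0.1', 4001, clientId=3)

earliest = ib.reqHeadTimeStamp(Forex('EURUSD'), whatToShow='MIDPOINT', useRTH=True)
print(earliest)   # the "you will never get data older than this" timestamp

ib.disconnect()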


Re: Fx data

 

There probably isn't data available any further back, at least not in a form that you can easily grab. For most things, IB only goes back to around then. You could also buy historical fx data and similar from several sources, but probably won't get any earlier than around 2005, either. Historical data for the particular symbol I'm most concerned with only goes back to 2008, although it's been traded a lot longer than that.


On Mon, Feb 15, 2021 at 8:48 AM <ghelie@...> wrote:
Hi,

I was trying to pull historical fx data yesterday and I was wondering if the data does not go back further than 2005 or if I did something incorrectly. I tried to pull all the fx data for around 7 currency pairs,
and for each of them, the gateway stopped returning bars around 2005.

Thanks