
Re: ReqHistoricalTicks() & Futures Data Limitations, Rolling Expiring Contracts

 

My tool is extremely similar to yours, so no bottlenecks in code either.

On Sun, Oct 1, 2023 at 8:07 PM Brendan Lydon <blydon12@...> wrote:
I am doing this from my sim account. I should probably switch to my live account to hopefully get better times?

On Sun, Oct 1, 2023 at 7:04 PM Jürgen Reinold via <TwsApiOnGroupsIo=[email protected]> wrote:
Just after 15:10 US/Central this afternoon, I requested Historical TickByTickLast data for Friday's NQZ3 session (20230928 17:00 through 20230929 16:00 US/Central):
  • I received 451,009 TickByTickLast objects for NQZ3
  • It took 445 requests/responses and 1,045 seconds to receive all data
  • that is an average of 2.347s per request (elapsed time)
  • but the median was sub-second at 0.659s.
We don't download historical data a lot, so we did not put much thought into the little tool:
  • it is a single-threaded event processor (no sleeps, delays, or built-in pacing)
  • it requests data in 1,000-tick chunks in reverse time order
  • it converts each returned TickByTickLast into a relatively expensive immutable Java object
  • it accumulates the objects in a list that is serialized into streamable JSON objects and written to file each time more than 10,000 ticks have been accumulated

That means the tool makes about 9 out of 10 requests immediately (within a few microseconds) after the callback for the previous request. There is a short processing delay (5ms to 40ms) before every tenth request for the data serialization and file storage.
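For readers who want to reproduce the pattern, here is a minimal Python sketch of the same reverse-chronological chunking (the class wiring, session handling, and useRth choice are my assumptions, not the actual Java tool):

import time
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class TickDownloader(EWrapper, EClient):
    def __init__(self, contract):
        EClient.__init__(self, self)
        self.contract = contract
        self.ticks = []

    def historicalTicksLast(self, reqId, ticks, done):
        # prepend so the list stays chronological while we walk backwards
        self.ticks[:0] = ticks
        if done and ticks:
            # next request ends where this batch began, issued immediately
            # (no client-side pacing); ticks at the exact boundary time may
            # repeat and need de-duplication
            end = time.strftime("%Y%m%d-%H:%M:%S", time.gmtime(ticks[0].time))
            self.reqHistoricalTicks(reqId + 1, self.contract, "", end,
                                    1000, "TRADES", 0, True, [])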

Now, the interesting finding is the discrepancy between average and median response times. IBKR paced the responses within chunks of ~60 seconds, each time with roughly the following rhythm:

  • Ten requests with response times around 600ms each
  • One request with 3.5 seconds
  • Three requests with 600ms each
  • One request with 4.5 seconds
  • One request with 600ms
  • One request with 5.5 seconds
  • Three requests with 6 seconds
  • One request with 12 seconds

Attached are a couple of charts of what that looked like. I do not have data on how long they keep that 60-second chunk rhythm up in case you download a couple of years' worth of data. My gut tells me that you will not be able to keep up the less-than-3-second average for very long runs.

Jürgen




On Sun, Oct 1, 2023 at 01:39 PM, <blydon12@...> wrote:
Running a script right now to get 2 years of tick data for NQ. It seems to be restricting my requests to one every 6 seconds. Are there times when this could improve? It is Sunday @ 2:30 p.m. where I am right now, for reference.



Re: APIPending status?

 

First of all, order status ApiPending is not an error condition. It is the state of your order(s) right after you place them. And the TWS API documentation does provide high-level descriptions of when to expect the various order states. It says for ApiPending:

ApiPending - Indicates order has not yet been sent to IB server, for instance if there is a delay in receiving the security definition. Uncommonly received.

I'd review the contract definition, and you should seriously think about requesting a well-configured contract from IBKR via reqContractDetails instead of initializing the various fields yourself. For example, I am not sure why you set a "strike" for an STK instrument, or initialize trading class, symbol, and local symbol by hand. A contract returned by reqContractDetails will have the important fields initialized with meaningful values.
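A minimal sketch of that approach in the Python API (the wiring around the connected app is assumed, not shown):

from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.contract import Contract

class App(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)

    def contractDetails(self, reqId, details):
        # details.contract comes back fully populated (conId, primaryExchange,
        # tradingClass, localSymbol, ...) and can be used as-is for placeOrder
        self.resolved = details.contract

# let IBKR resolve the contract instead of hand-filling every field
con = Contract()
con.symbol = "TSLA"
con.secType = "STK"
con.exchange = "SMART"
con.currency = "USD"
app.reqContractDetails(1, con)   # app: an already-connected App instance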

And then you use a Time in Force of IOC. Does the behavior change if you place the order with a simpler TIF of, say, GTC or DAY?

Jürgen

On Sun, Oct 1, 2023 at 03:22 PM, Colin Beveridge wrote:
I'm now having a seemingly identical problem using IB's python client -- I can only imagine I'm doing something boneheaded, but without documentation on the error, I'm a bit stuck.

Running on v177, I set up an order like so:

con = Contract()
con.conId = 76792991
con.symbol = "TSLA"
con.secType = "STK"
con.strike = 0.
con.exchange = "SMART"
con.primaryExchange = "NASDAQ"
con.currency = "USD"
con.tradingClass = "NMS"
con.localSymbol = "TSLA"

order = Order()
order.action = "BUY"
order.totalQuantity = 1.
order.orderType = "MKT"
order.tif = "IOC"
order.account = "[account name]"
app.placeOrder([order id], con, order)

This sends:

3-[order id]-76792991-TSLA-STK--0.0---SMART-NASDAQ-USD-TSLA-NMS---BUY-1.0-MKT---IOC--[account name]--0--1-0-0-0-0-0-0-0--0-------0---1-0---0---0-0--0------0-----0-----------0---0-0---0--0-0-0-0--1.7976931348623157e+308-1.7976931348623157e+308-1.7976931348623157e+308-1.7976931348623157e+308-1.7976931348623157e+308-0----1.7976931348623157e+308-----0-0-0--2147483647-2147483647-0---

... but a call to openOrders gives

OrderStatus. Id: [order id] Status: ApiPending Filled: 0 Remaining: 1 AvgFillPrice: 0 PermId: 0 ParentId: 0 LastFillPrice: 0 ClientId: 3 WhyHeld: MktCapPrice: 0

As I say, I imagine I'm setting something up incorrectly. If anyone can point me towards what (or even where to look), I'd be super grateful.



Re: C++ preventing EReader reading when socket is closed

 

On Sun, Oct 1, 2023 at 03:15 PM, David Armour wrote:

I agree with everything you said although I am not entirely happy with my final solution. I do not feel that destroying the EReader object is the correct way to disconnect. It is certainly not intuitive which means I cannot be the only person finding this problem. Is everybody using C++ just ignoring this error?

You may see this as a wart, but I, personally, won't comment with regard to how intuitive things are or any design aesthetic. I will say that consideration should be given to backward compatibility and just how much existing code would break (and therefore need to change) if the disconnect were removed from the destructor.

Just some food for thought... since there are usually ripple effects and unintended consequences, and API stability is actually very important. Also, I don't think it's wise to hold off fixing a simple, obvious, existing issue merely because there's an intention of addressing it in a more significant re-design. As they say, there is no time like the present, and I appreciate small, tractable, incremental changes.

However, this may come down to a judgment call, since, I guess, people have indeed been ignoring it or working around it (or it has simply gone unnoticed). After all, it only happens late in the game, when there usually isn't much network activity expected or necessarily happening.

Anyway, I will sleep on it one more night before submitting a bug report.

Is there a special place to report API bugs or do we just raise ticket as usual?

A pull request is probably the best way to go about this:



Re: C++ preventing EReader reading when socket is closed

 

On Sun, Oct 1, 2023 at 10:02 AM, David Armour wrote:

Thanks for your comment Buddy.

You're welcome. I think you've found an interesting oversight.

I agree that the logic would seem that setting m_isAlive to false during the ~EReader() call should prevent the code from trying to read from the socket.

Yeah, superficially and at first blush it would seem that way. But it'd be a mistake to think so. I can understand how that initial thought may have fooled the original author into thinking the code was correct though. All this is post-mortem speculation on my part however.

Nevertheless, once considered more thoughtfully, it becomes apparent that nothing prevents the destructor from being called while the processing thread's readToQueue is in the while block.

The std::atomic will only ensure that reading and writing m_isAlive from multiple threads is well-defined, but it guarantees nothing w.r.t. the destructor and what's happening in the while loop.

I tried playing around with that. If I pause after m_isAlive is set to false, then continue, the code works successfully without 509 error. It is as if at

"Pause", with a sleep or something? No... that's surely a game of whack-a-mole. It might work sometimes, depending on unpredictable factors, and might not.

It can be fun to experiment with the timing while you investigate though. You can also try playing w/ ordering by running the code on a single core/cpu and see how that affects things.

full speed the atomic operation is not being completed before the call to eDisconnect() happens. On Visual Studio this even happens when I turn off all Optimisations. As eDisconnect() does not use anything related to that atomic, I wonder if the compiler is parallelising the operation as all atomic operations are very slow.

Compiler optimizations fall somewhat into the category of "unpredictable factors" I alluded to (although they'd be deterministic given the compiler source and enough work). It's a bit surprising that a critical section wasn't implemented via mutex, but maybe the author wanted to avoid a performance penalty.

I also noticed that eDisconnect() sets m_fd = -1. This should prevent the call to processNonBlockingSelect() from trying to call onReceive(), which ultimately triggers receive() and recv(), causing the error. Again, the setting of the atomic variable is not being done in time.

I am still scratching my head over this. I suspect some synchronization issue between the various threads.

Anyway... I think you found a legitimate problem and I wouldn't worry about it too much. I like your proposed solution as well. Joining the thread and then disconnecting makes clear sense to me. And, as long as m_pClientSocket isn't being shared all over the place I don't see a reason to use more complicated locking via mutex.

If I were the reviewer I could see signing off after some confirmation tests... open a ticket and see what they have to say. If you submit a patch remember to change the IB_POSIX ifdef block too :-)



Re: C++ preventing EReader reading when socket is closed

 

I guess the assumption is that using std::atomic<bool> m_isAlive; turns the following into a critical section:

void EReader::readToQueue() {
    //EMessage *msg = 0;

    while (m_isAlive) {
        if (m_buf.size() == 0 && !processNonBlockingSelect() && m_pClientSocket->isSocketOK())
            continue;

        if (!putMessageToQueue())
            break;
    }
}

But it does not, so I take your point.


Re: C++ preventing EReader reading when socket is closed

 

After much deliberation, I think I figured out the problem.

In my opinion it is a bug in the TWS API for C++.

I would like someone's help to go through my logic and confirm it.

The EReader::~EReader() destructor closes the socket by calling eDisconnect(), then waits for the thread to complete by calling WaitForSingleObject(m_hReadThread, INFINITE);

In my opinion this is wrong. Why would you want to disconnect the socket when you have a thread running which is potentially calling recv() on the same socket? In my view, we have to wait for the thread to finish, then disconnect the socket.

My fix to the problem is to swap the two steps, i.e.

if (m_hReadThread) {
    m_isAlive = false;
    m_pClientSocket->eDisconnect();
    WaitForSingleObject(m_hReadThread, INFINITE);
}
becomes

if (m_hReadThread) {
    m_isAlive = false;
    WaitForSingleObject(m_hReadThread, INFINITE);
    m_pClientSocket->eDisconnect();
}

This has resolved the issue.

Could someone confirm my logic so I can issue a bug report to the IBKR guys?

Thanks


C++ preventing EReader reading when socket is closed

 

I have faced a problem with my code for a long time that only occurs during the call to EClientSocket::eDisconnect().

I have a separate message processing thread running which looks like this:
ftrMsgProcThrd_ = pool_->submit(
    [&]()
    {
        while (clientSocket_->isConnected())
        {
            signal_.waitForSignal();   // This waits 2 seconds.
            reader_->processMsgs();
        }
    });

I decided to tackle this annoying bug (not for the first time) and have found that after the call to EClientSocket::eDisconnect(), which calls EClientSocket::SocketClose(), which just calls the Windows Sockets closesocket() on the open socket, the message processing thread (EReader thread) still tries to perform a Windows Sockets recv() on the closed socket, resulting in a 509 error. I have traced that error to socket error 10038, which confirms it is an invalid socket (in this case, a closed socket).

Before the line "reader_->processMsgs()" I have tried checking for the socket still being open with if (clientSocket_->isConnected()), but it does not solve the problem. The EReader is running in its own thread as per the reader_->start() call.

I thought perhaps I needed to close the EReader before calling eDisconnect(), so I tried deleting the object and removing the call to eDisconnect(), because the destructor of the EReader calls eDisconnect() itself, but this does not fix the error. I still get the 509 caused by a read on the closed socket.

I am struggling here and would appreciate advice from any C++ coders who use a multi-threaded approach like the above. It is likely a threading issue, but if anyone else has faced a similar "disconnect" issue I would be happy to hear what you did to resolve it.


Re: TWS api multiple similar orders submission delays

 

"200ms-400ms? delay in between each order is transmitted by Gateway and has?Submitted status"
Do you wait for the "Submit" status before submitting your next order? Can you submit orders without waiting for the status update? How do you measure time?
I placed 8 orders (3 bracket orders) for 8 different stocks and the avg time per stock (i.e. 8 orders) was 3 msecs. I have seen some stocks (e.g. NFLX) take 5-8 msecs. I do not wait for the submit status, and by the time I am done placing 64 orders, avg of ~25 msecs have passed (and this includes time from TWS callbacks and my own logic).
Not sure how much difference does it make but I tested with very liquid US stks, am using C++ and TWS GUI, tested within 1/2 hour of market open.?
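For context, the usual way to submit a batch without waiting on statuses is the bracket pattern, where only the last child order transmits. A Python sketch along the lines of the IBKR bracket sample (ids and prices hypothetical):

from ibapi.order import Order

def bracket(parent_id, action, qty, limit_px, tp_px, sl_px):
    # parent is not transmitted; TWS holds it until the last child arrives
    parent = Order()
    parent.orderId = parent_id
    parent.action = action
    parent.orderType = "LMT"
    parent.totalQuantity = qty
    parent.lmtPrice = limit_px
    parent.transmit = False

    take_profit = Order()
    take_profit.orderId = parent_id + 1
    take_profit.parentId = parent_id
    take_profit.action = "SELL" if action == "BUY" else "BUY"
    take_profit.orderType = "LMT"
    take_profit.totalQuantity = qty
    take_profit.lmtPrice = tp_px
    take_profit.transmit = False

    stop_loss = Order()
    stop_loss.orderId = parent_id + 2
    stop_loss.parentId = parent_id
    stop_loss.action = take_profit.action
    stop_loss.orderType = "STP"
    stop_loss.totalQuantity = qty
    stop_loss.auxPrice = sl_px
    stop_loss.transmit = True   # transmitting the last child releases all three

    return [parent, take_profit, stop_loss]

# placeOrder returns immediately; no need to block on orderStatus callbacks:
# for o in bracket(100, "BUY", 1, 185.0, 190.0, 180.0):
#     app.placeOrder(o.orderId, contract, o)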


Re: IB's own SAMPLE Excel files still refer to de-supported order attributes so they don't work at all -- how do I bypass / fix??

 

Thank you for taking the time to step through the VBA code. If I'm understanding correctly, you're saying that the problem exists in the separate ddedll.dll library file, which is a black box / unmodifiable by the end user?

As for TWS / API versioning, I'm using:
  • A laptop on which I just did a fresh reinstall of Windows
  • A newly-downloaded TWS (version 10.25.1j, which is their 'LATEST' build, but I've also tried the most recent STABLE and BETA builds, same result)
  • A newly-downloaded TWS API version 10.25.01 (which is LATEST, but also tested STABLE, which is v 10.19.01, same result)
  • And the Excel file that generates the NBBO error is the default 'LegacyTwsDde.xls' file that gets downloaded along with the API. I literally just open it in its default state, put my username in cell D5 of the Basic Orders sheet, and press the 'Place / Modify Order' button on one of the default populated rows (e.g. IBM, row 15)... and it throws the NbboPriceCap error in cell J6, as shown in my screenshot.
IB support has thus far just continued to assure me that simply updating my TWS and API will solve everything, despite my protestations that it has plainly not.

They've acknowledged that the 3 order attributes were indeed desupported in API v 10.10 (as documented here), but the question I can't get to the bottom of is: what is it about the LegacyTwsDde.xls sample file that's generating the NbboPriceCap error and preventing orders from getting placed? It's all fine and well to "de-support" order attributes, but could it simply be the case that the developers forgot to update the LegacyTwsDde sample file, which is coded in such a way as to submit orders with 3 order attributes that no longer exist, so that something in the chain from Excel > API > TWS is flat-out rejecting these orders as containing, essentially, 'gibberish'? And -- if I'm understanding correctly -- there's nothing that I as the end user can easily modify in the sample file's VBA to simply not include these now-desupported attributes? (And if that's true, then... just where/when are these attributes getting instructed?)

These all seem like issues IB's API team should understand immediately, but...well, I'm here trying to diagnose from afar because I haven't been making any progress with them.


Re: did something change on 9/21/22 getting 366 on a feed that "WAS" working well.

 

Jürgen wrote: "TWS/IBGW actually memorizes the highest orderId for each clientId and each account. I did not know this either for the longest time, but I think it was JG who made a comment related to this in a post a couple months ago."

Your memory has not failed you: it was indeed me who wrote on a few occasions that TWS/IBGW memorizes the highest orderId for each clientId.


Re: Erratic results from live data requests

 

I just posted about RT Vol and bad ticks. The last 2 days have been bad but still usable with filtering out extreme values for price.


Re: did something change on 9/21/22 getting 366 on a feed that "WAS" working well.

 

@Richard

I know that you were not trying to suggest one should use the same numerical ids for different request types. I just wanted to add another reason why one should not do it.

Looks like we "late comers" have a little advantage since the TWS API source code contains more useful bits than what you had to work with in the early years.

@Gordon

We are meandering away from the original topic, but a couple thoughts on why you get "117" or something similar for nextValidId.

TWS/IBGW actually memorizes the highest orderId for each clientId and each account. I did not know this either for the longest time, but I think it was JG who made a comment related to this in a post a couple months ago. So I went through the logs and looked for the oldest clientId I could find for a client that placed orders (that clientId had not been used in at least three years), used that clientId to connect to the account, and for sure, automagically, nextOrderId upon connection returned a number that was one higher than the last order that client had placed three years ago.

So if you are reusing the same clientId and that client occasionally places orders, nextOrderId will be going up slightly over time. I guess the clientId that gave you a nextValidId of "117" has placed 115 or so orders over time.

You can break that cycle and reset all orderIds for all clientIds for an account back to, I believe, "1" with the "Reset API order ID sequence" button at the bottom of "Global Configuration -> Configuration -> API -> Settings".

We also never call reqIds while a client is running. The nextValidId we receive as part of the connection protocol is sufficient for a sequence of (thread safe) incrementing orderIds.

We recently added a little code that potentially nudges the clients' internal nextValidOrderId counter while they are running. That code assures that the nextValidOrderId counter is always higher than any orderId the client is exposed to by openOrder and orderStatus callbacks. This is most important for the master client and client 0, but we just added it to the framework for all clients, based on this comment in the TWS API documentation:

"However if there are multiple client applications connected to one account, it is necessary to use an order ID with new orders which is greater than all previous order IDs returned to the client application in openOrder or orderStatus callbacks."

Jürgen

On Thu, Sep 28, 2023 at 03:00 PM, Gordon Eldest wrote:

I never got a first nextValidId < 100.
Got things like "117". Why? I never looked at the why; I don't know of any zombie process, and it works, so ...


Erratic results from live data requests

 

I request live data with generic ticks set to RT Trade Volume (#375), which returns a trade volume string in addition to the regular default tick data (bid, ask, size, etc). Lately I've been getting erratic results where all the default tick data is missing and only the trade volume string is returned. Sometimes the request works correctly, but other times the exact same request will be missing the default data. The data is missing from the API logs too.
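For reference, a request of that shape looks roughly like this in the Python API (the contract and request id are hypothetical):

# generic tick 375 (RT Trade Volume) on top of the default tick types
app.reqMktData(1, nq_contract, "375", False, False, [])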

I've tried restarting the Gateway when this happens but I'm not sure that's helping.

Anyone have any suggestions?


Re: did something change on 9/21/22 getting 366 on a feed that "WAS" working well.

 

I never got a first nextValidId < 100.
Got things like "117". Why? I never looked at the why; I don't know of any zombie process, and it works, so ...

You're absolutely right, the rules are more relaxed, and I am a bad guy: I don't abide by all the rules that I may recommend. However, I was burned being too relaxed.

You have more experience than I at handling IB and have practical knowledge of the limits in use. Your approach works and fulfills your needs, and you're right, classifying requests is not a must.
To be a better guy ... here is what I do (IMHO, just a use case). Not the right thread for that, but the djinn is out of the bottle:

1- I use a single central requestId 'pump', thread safe,
and another one, separate, for OrderId (see the sketch after this list).
I used to use the same pump for both, but it's not pretty (very subjective criteria).

2- I always track nextValidId and use it as the seed for the pump,
taking the higher of my next id or IB's suggestion.
Handling of nextValidId must be within the protected critical section of the thread-safe pump.
But it's very rare; it happens only on connect.

3- I don't call reqIds; not that it's bad, just that I am not ready to wait for the answer.

4- I prefer "overpumping" (wasting ids) than taking the risk to re-use an existing id, even if serviced (even if it may work, I feel it's like playing with fire)

5- I try to keep continuity in OrderId (I do in fact, but the code is ready for any discontinuity).
But I don't care at all about continuity in request ids.
Assuming that the higher the reqId, the later the request was built, has been enough for diagnostics.

The side effect is that OrderId grows much slower than reqId, so you can rapidly diagnose what you are looking at just by the range of the number.
@Jürgen, de facto it does something like what you suggest with +100000 (using a fixed value makes more sense, I agree, as it's very unlikely you ever do 100,000 orders without a reboot).
However, again, human-wise it's difficult to read a huge number.
Most of the time, tests are conducted right after the code starts, hence small numbers, which are easier to memorize or even simply read.

6- I did try making it a bit field on the LSBs.
In fact it was not so handy, as sorting pending/sent/received request ids manually during debug can become cumbersome.
It helps track the class of request, but you lose the ease of tracking the sequence.

I dislike ranges, because I always wonder if or when they may overlap (or require code to handle that as an exception). Not enough experience with that; probably too dogmatic.
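The two separate pumps from point 1 could be as simple as this Python sketch (names hypothetical):

import threading

class IdPump:
    """Monotonic id dispenser; one instance for reqIds, one for orderIds."""
    def __init__(self, start=1):
        self._lock = threading.Lock()
        self._next = start

    def reseed(self, suggested):
        # on nextValidId: take the higher of our next id and IB's suggestion
        with self._lock:
            self._next = max(self._next, suggested)

    def take(self):
        with self._lock:
            nid = self._next
            self._next += 1
            return nid

request_ids = IdPump()   # burns ids fast; "overpumping" is fine
order_ids = IdPump()     # grows slowly, stays human-readable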

This is just a use case.