© 2025 Groups.io

questions on bracket and hedge orders

 

Three questions:

1. Suppose I send a bracket order to buy. Once the parent buy order executes, two sell orders go out: one is a limit order at a higher price, and the other is a stop order that gets activated at a lower price. After the parent order fills, but before the two sell orders fill, is it possible to cancel the two sell orders? Or will I get an error because all three are a package deal?

2. What is the difference between bracket orders and hedging orders? The conditional orders execute only after the parent order is filled; that is the same. Also, both make use of the .transmit flag. However, the bracket order examples seem to have more legs, and when a non-parent order fills, that fill cancels the others. Is that the only difference?

3. Can I construct "bracket/hedge" orders that are more general and have more legs? Will the behavior always be that a) child orders are only sent after the parent order goes out, and b) only one child order can fill?
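For reference, the transmit-flag chaining these questions revolve around can be sketched in plain Python. The `Order` stand-in below models only the fields relevant to the bracket pattern (the real TWS API `Order` class has many more), and all ids and prices are made up:

```python
from dataclasses import dataclass

# Minimal stand-in for the TWS API Order class; only the fields
# relevant to the bracket pattern are modeled here.
@dataclass
class Order:
    orderId: int
    action: str            # "BUY" or "SELL"
    orderType: str         # "LMT" or "STP"
    totalQuantity: int
    lmtPrice: float = 0.0
    auxPrice: float = 0.0  # stop trigger price for STP orders
    parentId: int = 0
    transmit: bool = False

def bracket_order(parent_id, quantity, entry, take_profit, stop_loss):
    """Build the classic three-leg bracket: the parent and the first
    child carry transmit=False; only the last child has transmit=True,
    so the whole package is held until it is complete."""
    parent = Order(parent_id, "BUY", "LMT", quantity,
                   lmtPrice=entry, transmit=False)
    profit = Order(parent_id + 1, "SELL", "LMT", quantity,
                   lmtPrice=take_profit, parentId=parent_id, transmit=False)
    stop = Order(parent_id + 2, "SELL", "STP", quantity,
                 auxPrice=stop_loss, parentId=parent_id, transmit=True)
    return [parent, profit, stop]

orders = bracket_order(100, 5, entry=100.0, take_profit=110.0, stop_loss=95.0)
```

A more general package would presumably follow the same pattern, keeping transmit=False on every leg but the last, though whether TWS accepts arbitrary leg counts with OCA-style cancellation is exactly what the question asks.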


Re: Order Id in error messages

 

On Mon, Oct 30, 2023 at 09:35 AM, Jürgen Reinold wrote:
Well, your characterization is not correct. The id parameter for callbacks always identifies the message the error relates to:
  • If the error relates to a request, id will have the numerical value of that requestId
  • If the error relates to an order, id will have the numerical value of the orderId

This is what I would expect, but the point is exactly that it does not work this way.
For example, if you place an order and make a reqPnLSingle request with your chosen reqId immediately after that, but before you get the order error, the next error message will come with that reqId, even if it relates to the order.
Indeed, this is confirmed by the documentation:

EWrapper.error(

reqId: int. The request identifier corresponding to the most recent reqId that maintained the error stream.
This does not pertain to the orderId from placeOrder, but whatever the most recent requestId is.


So it's the most recent reqId, and not necessarily the one relating to the order.
You can easily recreate the condition and check.

One workaround that comes to mind: whenever you need to make a request before handling the next error message pertaining to an order, make that request with the orderId in place of a reqId. Or it can be any offset from the orderId, as you suggested, but in any case one needs to expect that reqId in the error callback, which does not necessarily equal the orderId.
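One way to sketch that bookkeeping (the registry class and all names are illustrative, not part of the TWS API): record every id you hand out, whether orderId or reqId, in a single map, so the lone id parameter of the error callback can always be resolved to its context:

```python
# Illustrative registry keyed by every id handed to TWS, so the single
# `id` parameter of the error callback can be resolved to its context.
class IdRegistry:
    def __init__(self):
        self._by_id = {}

    def register(self, id_, kind, detail):
        # kind: "order" or "request"; detail: anything useful for logging
        self._by_id[id_] = (kind, detail)

    def resolve(self, id_):
        return self._by_id.get(id_, ("unknown", None))

registry = IdRegistry()
registry.register(42, "order", "BUY 5 MES")            # an orderId
registry.register(10_000_042, "request", "reqPnLSingle")  # a reqId
```

Inside the error callback you would then call `registry.resolve(id_)` instead of assuming the id refers to an order.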


Re: IB Gateway crash

 

Not sure I understand how an IBG crash may impact your keyboard. Also, I don't see how IBG could affect the OS so deeply that you need a hard reset.

This looks more like an issue at the OS level. If you use a remote desktop or a VNC-like KVM, that could also be a culprit; some dislike very fast scrolling displays, like a log burst in a window.


Re: Correlate orders and fills from the API to those in Flex Report

 

That's perfect. Thank you.


IB Gateway crash

 

Today, 2023.10.23 21:12:12 UTC, IBG crashed and did not restart. This happened on two machines: one hosts my trading application and a real-time IBG, the other only a paper IBG.
Subsequently, the keyboard was dead and I had to hard reset to recover. Has anyone had a similar experience, or does anyone have a clue?


Re: Correlate orders and fills from the API to those in Flex Report

 

I never had the need for Flex Reports but recently started a low-priority project to make an XML schema for automatic ingestion of Flex XML data. The test data I had collected for that does show some line-up between TWS API and Flex XML:
  • Field "execId" from the TWS API Execution class does line up with the "ibExecID" field in the Flex XML type "Trade"
  • The TWS API free-form field "orderRef" in the Order class is carried over to the Flex XML type "Trade" field "orderReference"

There may be others, but at least in my test data, "permId" does not line up with "ibOrderID".

You might have to select the Flex Query option "Include Audit Trail Fields" to see those fields in the Flex XML.
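A sketch of how such a join could look with Python's standard library. The XML fragment is invented from just the two field names mentioned above (`ibExecID` and `orderReference` on the Flex `Trade` type; real Flex output carries many more attributes), so treat it as illustrative only:

```python
import xml.etree.ElementTree as ET

# Invented Flex XML fragment using only the two attributes discussed
# above; real Flex reports have many more attributes per Trade.
FLEX_XML = """
<FlexQueryResponse>
  <Trades>
    <Trade ibExecID="0000e0d5.653f1a2b.01.01" orderReference="alpha-42"/>
    <Trade ibExecID="0000e0d5.653f1a2c.01.01" orderReference="alpha-43"/>
  </Trades>
</FlexQueryResponse>
"""

# Map each fill's ibExecID (which lines up with Execution.execId from
# the TWS API) to the orderReference set via Order.orderRef, giving a
# join key back to API-side executions.
tree = ET.fromstring(FLEX_XML)
exec_to_ref = {t.get("ibExecID"): t.get("orderReference")
               for t in tree.iter("Trade")}
```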

Jürgen


On Mon, Oct 30, 2023 at 10:19 AM, Little Trader wrote:

What is the most reliable way to correlate orders and fills from the API with the same in the Flex Report? It seems each fill has a unique execId and each order has a permId, but there are no such ids in the flex report.


Re: C++ reqHistoricalData() closing connection

 

From my experience, a connection closed through an exception happens when you submit to the Gateway or TWS a message that was built in EClient.cpp but contains an invalid value.

From that perspective, the area that may be an issue is what is inside your "Contract" object;
you didn't mention enough about it, in particular what you set up for type (and exchange).
More fundamental: a "dry" NQ is probably not going to work; you need to specify the contract date or a specific type, or maybe you mean NQX.

If in doubt, try ::reqMatchingSymbols to receive a list of everything prefixed with NQ that exists and is of interest to you, along with the appropriate official IB terminology.

Also, asking for keepUpToDate while onboarding the API adds difficulties to your experiments.
The simplest thing to start with is the supplied examples from IB; they should work (caveat on Order, but that's for another day). Then gradually change the example until you hit what makes it fail, and fix it.

Sunday: in general it should not matter if you are looking for historical data between 6 and 22 EST. However, watch for extra messages during login; last Saturday IB did planned maintenance.
IB does updates sometimes, but at specific hours (well outside RTH); this depends on the TZ of the instrument you are looking for and your registered TZ.
A lot of threads here deal with this and various strategies, but that's really fine tuning for operation.



Correlate orders and fills from the API to those in Flex Report

 

What is the most reliable way to correlate orders and fills from the API with the same in the Flex Report? It seems each fill has a unique execId and each order has a permId, but there are no such ids in the flex report.


Re: Order Id in error messages

 

Well, your characterization is not correct. The id parameter for callbacks always identifies the message the error relates to:
  • If the error relates to a request, id will have the numerical value of that requestId
  • If the error relates to an order, id will have the numerical value of the orderId
  • if the error is not directly related to a message your client sent (such as global errors), id will have a value of -1.

Between requestIds and orderIds: numeric values for orderIds have to follow strict requirements, possibly for very long periods of time (as in forever, or until you reset the sequence in TWS/IBGW), while requestIds are ephemeral and can be reused as soon as the last request they were used for is complete.

Therefore, you simply design a numeric assignment strategy for your client that makes sure that, at any point in time, the id value of an error callback can be uniquely related to a recent request or an order. There are many ways you can do that, but a simple approach that has worked well for us for years is this:
  • When your client connects, it receives the numeric value for the next valid orderId through the nextValidId() callback. For clientIds that have never been used before, nextValidId will return a value of 1
  • Memorize that value as the nextOrderId and increment it every time you need to assign an orderId
  • Similarly, create a nextRequestId that you use to assign unique ids to your requests. Assign the initial value with a large offset from nextValidId(), such as nextRequestId = nextOrderId + 10_000_000

This way, your error callback can be uniquely related to requests or orders.
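A minimal sketch of the allocation scheme described above; the class and method names are made up, only the offset idea comes from the post:

```python
REQUEST_ID_OFFSET = 10_000_000  # large offset, as suggested above

class IdAllocator:
    """Assign orderIds and requestIds from disjoint ranges so the id in
    an error callback can be classified unambiguously."""
    def __init__(self, next_valid_id):
        # next_valid_id comes from the nextValidId() callback
        self.next_order_id = next_valid_id
        self._request_base = next_valid_id + REQUEST_ID_OFFSET
        self.next_request_id = self._request_base

    def new_order_id(self):
        oid = self.next_order_id
        self.next_order_id += 1
        return oid

    def new_request_id(self):
        rid = self.next_request_id
        self.next_request_id += 1
        return rid

    def is_order_id(self, id_):
        # any id below the request base must be an orderId
        return id_ < self._request_base

alloc = IdAllocator(1)            # fresh clientId: nextValidId() is 1
order_id = alloc.new_order_id()   # 1
req_id = alloc.new_request_id()   # 10_000_001
```

The scheme assumes your client never places ten million orders in one id sequence; pick the offset to suit.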

Jürgen



On Mon, Oct 30, 2023 at 05:19 AM, bespalex wrote:
This one thing has almost blown my mind recently, so I guess the 'discovery' may be helpful for others.
Normally, when you handle errors, you would expect to get an order id in the callback, but it actually does not work this way.
You only get the order id as the reqId if the last thing you did was place an order; but if you requested something else requiring a reqId, like reqPnLSingle for example, you would get that reqId back in the error callback.
Yes, I know this is mentioned in the documentation, but still, come to think of it, this can really mess up your algo if you are not careful enough!


Order Id in error messages

 

This one thing has almost blown my mind recently, so I guess the 'discovery' may be helpful for others.
Normally, when you handle errors, you would expect to get an order id in the callback, but it actually does not work this way.
You only get the order id as the reqId if the last thing you did was place an order; but if you requested something else requiring a reqId, like reqPnLSingle for example, you would get that reqId back in the error callback.
Yes, I know this is mentioned in the documentation, but still, come to think of it, this can really mess up your algo if you are not careful enough!


Re: Does IB really limit you 1 session at a time with SSO?

 

Oh ok I see. I just verified that you can't have multiple users under your demo account, so I guess only for the demo I have to deal with this 1 session for the entire account problem... I will be sure to create a second user in my live account though. Thanks!


Re: Does IB really limit you 1 session at a time with SSO?

 

The limitation is one session for each user name at any point in time.

But you can create a second user name for your live account through the Client Portal. That second user can have the same or fewer permissions. For example you could disable trading permissions for the second user if you are concerned about accidental mobile "butt trading".

Jürgen


On Sun, Oct 29, 2023 at 12:09 PM, Jimin Park wrote:

For me it is impossible to do the following which I can do with any other brokers with APIs.
  • Run your automated trading system
  • Login to your trading account and check the live positions, P&L and account balances

With IB, I have this weird limitation where
  1. I run the Gateway API Java server and login to my live account by going to . Then I start my trading system, which starts trading soon after.
  2. I open my mobile app or the IB's website to login to my live account to check the trades and P&L.
  3. My Gateway API Java server loses the logged-in session until I have to manually login again at

Is this really IB's limitation? It's impossible for you to run your trading system and login to their client app to check your own account?


Re: updateMktDepthL2 missing data points

 

IDK how much more of this you want to read about. But I'll just try to bolster the crux of Jürgen's reply by referring you to Wikipedia regarding the .

I'll also add that the "first approach" they describe (re-entrancy, local store, immutable objs) is often the approach people tend to overlook. This may happen because one doesn't need extra tools for the first approach, it's basically a "style". And, I suspect if you show someone a "thing" (mutex, semaphore, etc) they remember it more than if you describe a "way" (don't share data, put everything on the stack, etc).

Well, I'm not an educator so that's all conjecture.

The second approach, otoh, can't be avoided in some cases... often when dealing with physical resources. Anyway, you should choose the approach which suits your given circumstances. Only experience and judgement will help you know which. I'd err toward the first approach if there are any doubts. But again, the first approach doesn't often come naturally until it's practiced a bit... and sometimes it's not worth the extra mental gymnastics.

As usual, YMMV.


Does IB really limit you 1 session at a time with SSO?

 

For me it is impossible to do the following which I can do with any other brokers with APIs.
  • Run your automated trading system
  • Login to your trading account and check the live positions, P&L and account balances

With IB, I have this weird limitation where
  1. I run the Gateway API Java server and login to my live account by going to . Then I start my trading system, which starts trading soon after.
  2. I open my mobile app or the IB's website to login to my live account to check the trades and P&L.
  3. My Gateway API Java server loses the logged-in session until I have to manually login again at

Is this really IB's limitation? It's impossible for you to run your trading system and login to their client app to check your own account?


C++ reqHistoricalData() closing connection

 

Hi,

Whenever I call reqHistoricalData() the connectionClosed callback is hit in my Client Class. I am trying to get real time bar updates. Below is what I am passing into reqHistoricalData():
1, ->TickerId tickId
NQ futures contract (I have permissions), Contract contract
"", (for real time updates) string endDateTime
"1 D", string durationStr
"1 min", string barSizeSetting
"TRADES", string whatToShow
1, int useRTH
1, int formatDate
true, bool keepUpToDate
TagValueListSPtr(), const TagValueListSPtr& chartOptions

Any tips? Today is Sunday, and the market is closed, if that matters.
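For context, an under-specified contract is a common cause of a dropped connection here: a bare "NQ" is ambiguous. A minimal Python stand-in mirroring the TWS API Contract field names (the concrete values, including the contract month, are only an example):

```python
from dataclasses import dataclass

# Illustrative stand-in mirroring the TWS API Contract fields that are
# typically needed for a future: a bare symbol is ambiguous, so secType,
# exchange, currency, and the contract month should be explicit.
@dataclass
class Contract:
    symbol: str = ""
    secType: str = ""
    exchange: str = ""
    currency: str = ""
    lastTradeDateOrContractMonth: str = ""

# Example values only; verify the month/exchange for your own use.
nq = Contract(symbol="NQ", secType="FUT", exchange="CME",
              currency="USD", lastTradeDateOrContractMonth="202312")
```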


Re: updateMktDepthL2 missing data points

 

my design is pretty much the same, except much less featureful than what Jürgen and his buddies implemented.

if i should write it as simple as i can, the design is this:
  1. i have my own tws api client implementation:
    • it is heavily multithreaded (as it was cheap for me to implement), with the respect that there is only one "pipe" between my api implementation and ib gw where messages go through.
    • for multithreading i use queues heavily, so that the items are always sorted as they came (at least within that topic queue, like ticks, order messages etc).
    • for incoming messages, i have queues for each topic (so that i keep the order of the messages as they came in) and process them one by one in a separate thread. this is very important, not to change the order of the messages within the topic.
    • for outgoing messages, as there is only one pipe out, i process the messages as i pass them in the message processor, with the only distinction, that i have a priority queue (which is always processed first) and a standard queue (which is processed once the priority queue is empty).
    • for each request type (with some exceptions, like order requests and messages, which are handled in a single class as they all originate from placing an order) i have a single class which implements preparation and sending of the request, processing of responses and incoming events, and handling of corresponding errors.
    • i have a single class (called TWSService) which implements various providers from my abstraction library and directs the requests to specific handlers. it also takes care of the connection handshake, monitoring the connection state (and notifying it outside via listeners), disconnecting, and automatic reconnection, if configured so.
    • the requests are passed to this implementation with uuid request id which is a unique id within my abstraction library, and it is mapped to tws api request id.
  2. i have an abstraction library that hides all the specifics of provider apis (like tws api, so that i could change data feed for example without changing anything in my code, just adding new api implementation) and provides all the features in the way that is more suitable for my consumers (apps):
    • i have connectors that manage related stuff for each specific topic (like accounts, contracts, data, orders, positions...).
    • i have provider interfaces that make standard communication interface for communication with api implementations (currently the only implementation i have is my tws api implementation). each connector gets one instance of a provider implementation to communicate with. in other words, the connector encapsulates the provider to build some more features on top of the provider, if needed.
    • some connectors provide some logic for stuff that is not api specific but rather my library specific, so if i would use different api later, the logic would/should work the same for the other api.
    • i heavily use listeners to pass received data to the consumers. this all happens asynchronously, as the request travels to my tws api implementation and to ib gw, and from ib gw the messages travel through various queues back to the consumers (using standardized interfaces or objects from my abstraction library).
    • everything that would/could cause blocking (or does not need to be processed in-process) has its own thread (like data serialization, logging, ...) and queues so that the processing thread is as fast as it could be.
    • serializers serialize the data in batches. that is, they read the queue while there are items in it and only once the queue is empty, the transaction is committed (i use databases).
  3. and then i have various consumers (apps):
    • they instantiate TWSService instance with specific parameters (host, port, client id) and pass the instance to a specific connector (which accepts a provider interface, TWSService implements those provider interfaces).
    • then all the communication is done through my abstraction library using the connectors (to send requests) and listeners (to receive messages).
    • the consumer also listens to the connection change interfaces directly on TWSService.
this is pretty much the general concept of my infrastructure.
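The priority-before-standard outgoing queue described in point 1 can be sketched like this (all names are illustrative, and a real sender would block on the queues in its own thread rather than drain and exit):

```python
import queue

# Two outgoing queues feeding one "pipe": the sender checks the
# priority queue first and falls back to the standard queue only when
# the priority queue is empty.
priority_q = queue.Queue()
standard_q = queue.Queue()
sent = []

def send_pending():
    """Drain both queues into `sent`, priority items first."""
    while True:
        try:
            sent.append(priority_q.get_nowait())
        except queue.Empty:
            try:
                sent.append(standard_q.get_nowait())
            except queue.Empty:
                break

standard_q.put("reqMktData")
standard_q.put("reqHistoricalData")
priority_q.put("cancelOrder")   # e.g. cancels should jump the line
send_pending()
```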


Re: Placing an order using the Gateway API to the demo account results in 401 Unauthorised

 

Oh ok, nevermind. I logged into the paper trading account in IB website and I can see the submitted orders.


Re: Placing an order using the Gateway API to the demo account results in 401 Unauthorised

 

Hi Jurgen,

Thanks for helping out again. The maintenance was the issue. I tried placing the order this morning, and the order was successfully placed with PreSubmitted status. However, it is now stuck in PreSubmitted and not actually getting submitted. Is this because today is Sunday, and markets are closed? Or am I missing something in my API call?

Response from place order request
[
{
"order_id": "1172085515",
"order_status": "PreSubmitted",
"encrypt_message": "1"
}
]

Response from order status request
{
"sub_type": null,
"request_id": "759",
"server_id": "496",
"order_id": 1172085515,
"conidex": "586139726@CME",
"conid": 586139726,
"symbol": "MES",
"side": "B",
"contract_description_1": "MES DEC23 (5)",
"listing_exchange": "CME",
"option_acct": "c",
"company_name": "Micro E-Mini S&P 500 Stock Price Index",
"size": "5.0",
"total_size": "5.0",
"currency": "USD",
"account": "DU8021006",
"order_type": "MARKET",
"cum_fill": "0.0",
"order_status": "PreSubmitted",
"order_ccp_status": "0",
"order_status_description": "Order Submitted",
"tif": "GTC",
"fg_color": "#FFFFFF",
"bg_color": "#0000CC",
"order_not_editable": false,
"editable_fields":"",
"cannot_cancel_order": false,
"deactivate_order": false,
"sec_type": "FUT",
"available_chart_periods": "#R|1",
"order_description": "Buy 5 Market, GTC",
"order_description_with_contract": "Buy 5 MES DEC'23 Market, GTC",
"alert_active": 1,
"child_order_type": "3",
"order_clearing_account": "DU8021006",
"size_and_fills": "0/5",
"exit_strategy_display_price": "4136.50",
"exit_strategy_chart_description": "Buy 5 Market, GTC",
"exit_strategy_tool_availability": "1",
"allowed_duplicate_opposite": true,
"order_time": "231029050317"
}


Re: Placing an order using the Gateway API to the demo account results in 401 Unauthorised

 

You are now moving past the point that I can be helpful.

Get a "second opinion", as in place an order in the paper account through TWS or the Client Portal to make sure the paper account is ready. Then try the same order through the Client Portal API. Are you sure you have trading permissions for the instrument?

And I would not dismiss intermittent outages over the weekend due to maintenance. It is not often that notes/warnings about maintenance pop up during the TWS login, but this weekend they did. The messages are gone now, so it's worth another try.

Jürgen



On Sat, Oct 28, 2023 at 11:11 AM, Jimin Park wrote:
Following is what I did:
  1. Start the local Gateway API Java server
  2. Open and login with the DEMO account.
  3. Check to see the session is good.
  4. Check and I get the following
  5. Try to place an order and I get a 401 Unauthorized error.

What concerns me is that when I look at the iserver's auth status, it says "authenticated": false.
How do I get myself authenticated and authorized to start placing orders to the demo account?
I am currently testing on Saturday over the weekend, and I am not sure whether IB maintenance has anything to do with it. So far, I can access the /trsrv endpoints to get contract information. Not sure if the iserver is up right now.

If there are steps I am missing prior to placing orders to the demo account, please let me know.


Re: updateMktDepthL2 missing data points

 

When I read your initial post, my first thought was "data corruption due to multi-threading", and it looks like that is where the discussion with Gordon Eldest, buddy, and fordfrog is converging.

Level II data for active instruments can generate a lot of data and, more importantly, bursts of hundreds or even thousands of callbacks in peak seconds. Take a look at this post where I shared some stats for our setup. We have days with 2,000,000 callbacks within five-minute windows, which is a sustained rate of over 6,000 callbacks per second for the entire period.

I am not sure how busy USD.JPY is and whether you subscribe to more than one pair, but it is safe to assume that you will have peak seconds with tens or hundreds of callbacks. On the other hand, your code seems to have no synchronization between threads at all, so it is just a matter of time until corruption happens.

You pointed to some code from Ewald's ib_insync. If you look a little closer at the entire library, you will find carefully placed synchronization statements that make sure no data corruption takes place, and a design that eliminates globally shared state as much as possible.

You will find several posts in the archive about multi-threading, how to do it, and why it's hard. There is a lot of truth in the various contributions, but let me share some thoughts on architecture and design for (massively) multi-threaded TWS API clients that have worked very well for us and yielded rock-stable applications with little or no data-access synchronization. We even have certain applications with 100+ threads that connect to several TWS/IBGW instances simultaneously and process the individual "data firehoses" flawlessly on a multi-processor machine that executes many of them truly in parallel.

Granted, we develop in Java and the language has a rich set of concurrency features, but all programming languages (including Python) have add-ons or libraries with similar or identical functionality. The following should, therefore, apply equally to all TWS API implementations (and multi-processing applications in general).

We had the following goals for our applications:
  • Any application can execute code from any of our libraries in as many parallel threads as it is useful.
  • Synchronization can be complex to program correctly, is expensive at runtime, and significantly reduces the effective parallelism of multi-threaded applications. Therefore, by design, code needs to eliminate the need for locks/semaphores or other synchronization by eliminating all (or at least the vast majority) of globally shared data.
  • The only acceptable synchronization is when a thread runs out of "things to do" and needs to wait (you guessed it - not sleep!) until more data is available

So here is what we do ...

Ruthless separation of concerns, strict need-to-know-only, and only immutable objects

At the architecture level we break classes and modules into the smallest possible pieces so that all functions are related to just one topic/domain/subject. Instead of having an EWrapper object accessible to all functions in the application, we have a thin layer of code that wraps TWS API into a "controller" paradigm and groups the various functions into approx 50 small task oriented interfaces such as: Account, Position, Contract, Order, TickByTick, MarketDepth, ... Also, each interface defines classes for the data returned by TWS API callbacks so that each request returns a single immutable object that carries all parameters from the callback. In Java, these classes cannot be extended and instantiated objects cannot be modified once created. This way, objects can be shared side-effect free with many different modules in many parallel threads.
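In Python, the immutable-callback-object idea might look like the frozen dataclass below; the field names follow the updateMktDepthL2 callback parameters, while the class itself is a hypothetical illustration:

```python
from dataclasses import dataclass

# One frozen object per callback: it cannot be modified after creation,
# so it can be handed to any number of threads side-effect free.
@dataclass(frozen=True)
class DepthUpdate:
    req_id: int
    position: int
    market_maker: str
    operation: int   # 0 insert, 1 update, 2 delete
    side: int        # 0 ask, 1 bid
    price: float
    size: float

u = DepthUpdate(3001, 0, "ISLAND", 1, 1, 149.50, 300.0)
```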

The controller completely hides the existence and details of TWS API (such as requestIds) so that the application code only deals with high level objects along the lines of the domain interfaces. The signature of request calls does not have a requestId parameter any longer but callers provide a handler object instead that receives all callbacks and errors for just that request. In other words and closer to your problem, even multiple MarketData subscriptions are completely separate from each other in that the application provides unique handler objects for each of them.
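A toy sketch of that controller paradigm (all names are illustrative; a real implementation would forward requests to EClient and dispatch incoming EWrapper callbacks):

```python
# The application never sees a requestId: it passes a handler object
# and the controller does the id bookkeeping internally.
class Controller:
    def __init__(self):
        self._next_id = 1
        self._handlers = {}

    def req_market_depth(self, contract, handler):
        req_id = self._next_id
        self._next_id += 1
        self._handlers[req_id] = handler
        # real code would call EClient.reqMktDepth(req_id, contract, ...)
        return req_id

    def on_update(self, req_id, update):
        # called from the EWrapper callback; routed to the one handler
        # registered for just this subscription
        self._handlers[req_id].on_depth(update)

class Collector:
    """Example handler: each subscription gets its own instance."""
    def __init__(self):
        self.updates = []
    def on_depth(self, update):
        self.updates.append(update)

ctrl = Controller()
h = Collector()
rid = ctrl.req_market_depth("EURUSD", h)
ctrl.on_update(rid, ("bid", 1.0612, 1_000_000))
```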

The Java implementation of TWS API ships with an ApiController class that shows how this can be implemented. And the good news is that you can develop your controller over time since it only needs to support the requests your applications actually use.

No global state and data streams

The main reason why a multi-threaded application needs locks/semaphores is to prevent data corruption when multiple threads simultaneously modify some global state or data.

In your case, you have a shared global object that is not only modified by the updateMktDepthL2 callback [self.series[reqId].append( ...] and the thread that saves the data [app.series[reqId] = list()] but also by several independent market data streams for different instruments [e.g. indexed by reqId]. If you try to solve your issue with locks and semaphores, that object will become the application's bottleneck since it will be locked all the time effectively reducing parallelism to just about 1.

One way to eliminate the need for synchronization is to eliminate the global series object entirely:
  • Instead of storing the individual parameters from TWS API callbacks in lists with columns, define a class that carries all parameters for the callback and creates immutable objects that can be freely shared by all parts of your application.
  • Separate the data streams for multiple subscriptions/reqIds from each other so that they can operate in parallel and are not even aware of each other
  • Define a "consumer" interface that all classes implement that need to work with the data. In your case that could be
    • a MarketBook class that immediately handles the updates without having to store them in a list and that only remembers the last price and size update for each sell side and buy side slot (as @fordfrog suggested)
    • a FileLogger class that consumes the objects one at a time and saves them to file. This class could simply take advantage of "file output buffering" or keep a private list of a certain number of objects before it writes them to disk. That list is local to the logger and requires no synchronization. The logger class could also hide the fact that the logger actually runs in a separate thread. It would simply create a queue that the consumer interface feeds with data objects and that the logger thread reads and stores to file at its leisure and possibly with a lower priority than the real-time stream consumers.
    • There could be other users of the data that would simply implement the "consumer" interface as well.
    • Since each instance of the data is immutable, one and only one copy of the object exists at any point in time regardless of how many consumers or threads receive a copy.
  • The controller keeps a list of consumers for each callback (such as MarketBook and FileLogger) or you could create a replicator class that receives objects from the API Controller and forwards them to one or more real consumers. We have that, for example, for TickByTick data where the replicator transparently forwards the objects provided by callbacks to several consumer streams:
    • A logger that serializes objects into Json representations and saves them to GZIP compressed files. We have a central generic logger that can handle all object types and that remembers the temporal sequence of all objects from the various streams. That way, we can replay a session in the exact order of events it took place in real time and analyze, after the fact, how events for one instrument foreshadowed changes for another instrument. Or we can measure the request/response times, since the logger keeps Instant time stamps with nanosecond resolution (though our clock more realistically has microsecond precision).
    • Several filter, aggregator, and processor streams that calculate and determine interesting details from the data stream. The results are published as streams again so that they can be consumed by multiple modules possibly in multiple threads and have to be calculated only once.
    • The trading logic that takes the raw TickByTickLast and TickByTickBidAsk data as well as filtered and aggregated information into consideration
    • The Position manager that keeps its own view of P&L and current market value of positions. This is independent from what TWS API provides since our requirements cannot be satisfied with TWS API data that resets P&L values some time in the middle of the night.
    • The order manager so that it is aware of the most recent bid, ask, and trade prices when it is time to place new orders
    • ...
    • This sounds like a lot of code but it actually is not. Proper structure and architecture reduces the actual amount of code, as you can also see within Ewald's ib_insync. Applications simply "wire up" the various data streams with predefined or custom modules that comply with the consumer and provider interfaces. Depending on the application needs, data streams can execute within a single thread, use a fixed pool of threads, or have dedicated threads for some streams without having to worry about locking shared or global data objects.
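A compressed sketch of the consumer/replicator wiring described above, with the logger hiding its own worker thread behind the same consume() interface (all names are illustrative):

```python
import queue
import threading

class Replicator:
    """Fans each immutable update out to every registered consumer."""
    def __init__(self, *consumers):
        self.consumers = consumers
    def publish(self, item):
        for c in self.consumers:
            c.consume(item)

class MarketBook:
    """Synchronous consumer: only keeps the most recent update."""
    def __init__(self):
        self.last = None
    def consume(self, item):
        self.last = item

class QueueLogger:
    """Consumer that hides a worker thread behind a private queue, so
    slow I/O never blocks the real-time stream."""
    def __init__(self):
        self.q = queue.Queue()
        self.lines = []
        self._t = threading.Thread(target=self._run, daemon=True)
        self._t.start()
    def consume(self, item):
        self.q.put(item)          # fast: just enqueue
    def _run(self):
        while True:
            item = self.q.get()
            if item is None:      # sentinel ends the worker
                break
            self.lines.append(repr(item))
    def close(self):
        self.q.put(None)
        self._t.join()

book, log = MarketBook(), QueueLogger()
rep = Replicator(book, log)
rep.publish(("bid", 1.0612, 500_000))
rep.publish(("ask", 1.0614, 750_000))
log.close()
```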

This became a little longer than initially intended but hopefully gives you some food for thought for a more powerful and scalable way to solve the data corruption and deal with MarketDepth fire hoses.

Jürgen


On Sat, Oct 28, 2023 at 02:44 PM, John wrote:
@Buddy, ... However you might be correct that it could come from an unfortunate thread concurrency, i.e. a huge amount of data being unequally added to the tuple/list at the exact time the dataframe is created, so I might simply need to hard copy the list before converting that copy to a dataframe instead of the list itself. I'll try that next week and keep the group posted.