
Re: Implied Volatility of an EXPIRY

 


You can get IVs directly from the API and calculate any HV you want from that.
Here is how I calculate various HVs, e.g. for the S&P 500 (I am using ib_insync, so some calls may look different from the native API calls):
  • define the contract as the SPX index: spx = Index(symbol="SPX", exchange="CBOE")
  • request from the API the sequence of IV data that is calculated off the options on that contract, e.g.: bars = reqHistoricalData(contract, endDateTime="", durationStr="2 Y", barSizeSetting="1 day", whatToShow="OPTION_IMPLIED_VOLATILITY", useRTH=True, formatDate=1, keepUpToDate=False, chartOptions=[])
  • Now you have bars over the past 2 years at 1-day resolution, and you can calculate any HV you want, e.g. by putting it in a pandas DataFrame; for a 10-day HV:
  • ivDF = pd.DataFrame(bars)
  • ivDF = ivDF.drop(columns=["Date"])  # can't apply 'rolling' over a string
  • hv10DF = ivDF.rolling(10).mean().tail(timeframeInTradingDays)

How IB calculates the option implied volatility is not clear to me, but in any case the various HVs I calculate correspond well to what is shown in TWS.
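For a self-contained illustration of the rolling step, here is a runnable sketch with synthetic IV bars standing in for the API response (reqHistoricalData needs a live TWS/Gateway session, and the column names here are assumptions, not the exact API output):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the bars returned by reqHistoricalData(...,
# whatToShow="OPTION_IMPLIED_VOLATILITY"): one IV close per trading day.
rng = np.random.default_rng(0)
bars = pd.DataFrame({
    "date": pd.bdate_range("2023-01-02", periods=504),  # ~2 years of trading days
    "close": 0.18 + 0.02 * rng.standard_normal(504),    # IV levels around 18%
})

ivDF = bars.drop(columns=["date"])  # rolling() needs numeric columns only
hv10DF = ivDF.rolling(10).mean()    # 10-day rolling mean of the IV series
print(hv10DF.tail(3))               # last rows, as in the .tail() call above
```

The first 9 rows of the rolling mean are NaN, since a full 10-day window is not yet available there.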





On Oct 3, 2023 at 1:26 AM -0700, ebtrader via groups.io <jsiddique@...>, wrote:

I would save options prices each week before they expire, as well as the underlying prices. That way you should be able to get a historical IV over time, using Black-Scholes to back into the IV. If you need help with the code, feel free to email me and I am happy to help with that.

Ebtrader

On Mon, Oct 2, 2023 at 9:07 AM Michael Sutton <mikesutton@...> wrote:
Resurrecting an old thread as I have a similar question.

I think I've convinced myself that the way the "IV of a specific chain" is calculated is the same way that the VIX is calculated, as illustrated in this white paper. There is no Black-Scholes of any sort going on with this calculation.

That means that from the API, it's possible to calculate the _current_ IV of a specific expiry, but not the _history_ of the IV that's shown in Volatility Lab. Has anyone figured out a way to do that? (I suspect the answer is it's not possible. If that's the case, are people using IVolatility.com, which looks like it no longer has any free component? What about getvolatility.com?)

My other question is what exactly is returned when you request historical volatility from the API? My assumption is that it is something similar to the VIX calculation, where the two expiries closest to 30 days are interpolated to that 30-day mark, although I haven't tried to do a sample calculation to confirm.



Re: How to estimate order execution impact on excess liquidity?

 

I get the margin delta with "Check Margin" in TWS or IBApi.Order.WhatIf. Also, using a limit order I can get the cash balance delta. My question was whether the formula for excess_liquidity_delta was correct.


Re: How to estimate order execution impact on excess liquidity?

 

Does the "Check Margin" feature in the TWS Order Entry form give you the information you are looking for?

In that case, take a look at the documentation for "what-if" orders via the TWS API and the whatIf flag of the Order class.

Jürgen


On Mon, Oct 2, 2023 at 05:43 PM, Lipp F. wrote:
Assuming that an order execution has $cash_balance_delta impact on cash balance and $margin_delta on margin, is it correct to assume that the impact on excess liquidity is
$excess_liquidity_delta = $cash_balance_delta - $margin_delta?
Is this accurate? Is there any way to have IB estimate excess liquidity prior to the order execution instead? TIA.


Re: Implied Volatility of an EXPIRY

 

I would save options prices each week before they expire, as well as the underlying prices. That way you should be able to get a historical IV over time, using Black-Scholes to back into the IV. If you need help with the code, feel free to email me and I am happy to help with that.

Ebtrader

On Mon, Oct 2, 2023 at 9:07 AM Michael Sutton <mikesutton@...> wrote:
Resurrecting an old thread as I have a similar question.

I think I've convinced myself that the way the "IV of a specific chain" is calculated is the same way that the VIX is calculated, as illustrated in this white paper. There is no Black-Scholes of any sort going on with this calculation.

That means that from the API, it's possible to calculate the _current_ IV of a specific expiry, but not the _history_ of the IV that's shown in Volatility Lab. Has anyone figured out a way to do that? (I suspect the answer is it's not possible. If that's the case, are people using IVolatility.com, which looks like it no longer has any free component? What about getvolatility.com?)

My other question is what exactly is returned when you request historical volatility from the API? My assumption is that it is something similar to the VIX calculation, where the two expiries closest to 30 days are interpolated to that 30-day mark, although I haven't tried to do a sample calculation to confirm.


Re: C++ preventing EReader reading when socket is closed

 

Hey, if you feel that strongly about it and have the time then go ahead and make the pull request. In the worst case scenario it'll be good practice, a learning experience and software development lesson.

¯\_(ツ)_/¯


Re: C++ preventing EReader reading when socket is closed

 

@Gordon, @Buddy,

"mountain"?? : hardly. There is no impact to existing code with this proposed change (tested). I am simply adding functionality that allows users to manually stop the thread that they started manually.? In fact I can leave the destructor of EReader the same as it was without swapping anything and just create a new function "stop()" which will allow users to disconnect cleanly if they desire to,

(Note to Gordon on your concern about the initial use of EReader during the connection: this is what the "if (m_hReadThread)" statement takes care of in the original TWS API code, and it is still there in my proposed change.)

Let me remind you that the current situation results in a 509 error, which is basically the API's way of saying that something bad happened, it does not know what it is, and your code must be bad. That is not the case here; it is the API code that is bad. Now that we know what causes it, we can probably live with it and safely ignore it, but personally I don't like that sort of situation. If the developers intended to use this way of disconnecting the socket, then they should have properly taken care of the error message it creates. The fact that they didn't means that what they coded was unintentional and is therefore clearly a bug. A bug needs to be fixed.

However, I realise that changing (adding to) the interface will make the C++ interface different from the other languages, which is probably not desirable.

The only way to correct this while keeping the existing interface the same is to create an EMutex variable which is shared by both EReader and EClientSocket (I suggest using a composition pattern here). Changes to the m_isAlive variable need to be locked by the mutex, as does the check inside the "while(m_isAlive)" loop. This is a larger code change and needs more care to be done right, but it is not that difficult and I have implemented it successfully as well. In fact, using EMutex here does not slow the loop down (checking the mutex is as simple as reading a flag), since the only time it is ever locked is when we are disconnecting, which means speed is not a "critical trading" issue.

I agree it is good to have this thread capture this discussion for the record, but I will definitely raise this problem with the coders, as I still firmly believe this to be a bug. I do not wish to have to fix this myself every time there is a new API update.


Re: C++ preventing EReader reading when socket is closed

 

Yeah, I see most of this as making a mountain out of a molehill. It strikes me, largely, as a misunderstanding; much like issuing a SIGKILL but expecting SIGTERM.

If you are familiar with Unix signal processing, you know that SIGKILL is immediately forced upon the process by the OS and the process may be left in an untidy state. SIGTERM, on the other hand, gives the process an opportunity to clean up and exit gracefully.

Since the API code is readily available, folks can implement their own version of EReader which suits their preference in this regard. Therefore the distinction becomes moot.

There may be some chance of acceptance for a minor pull request which, e.g., swaps the calls to the thread join and socket disconnect. But even this could be seen as a dog chasing its own tail, since the argument would merely come down to disliking a default.

Moreover though, when you consider that the code for EReader hasn't changed in years... even a small change becomes unlikely. And, the suggestion of anything "major", like an intermediate "stop" method, becomes far fetched IMHO.

It's certainly an interesting nuance but probably better addressed by documentation alone. At least we have this thread of conversation now for reference :-)


How to estimate order execution impact on excess liquidity?

 

Assuming that an order execution has $cash_balance_delta impact on cash balance and $margin_delta on margin, is it correct to assume that the impact on excess liquidity is
$excess_liquidity_delta = $cash_balance_delta - $margin_delta?
Is this accurate? Is there any way to have IB estimate excess liquidity prior to the order execution instead? TIA.
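One way to sanity-check the formula is that a whatIf preview already returns equity-with-loan and maintenance margin both before and after the hypothetical fill, and excess liquidity is roughly equity-with-loan minus maintenance margin. A sketch of that arithmetic (the numbers are invented; in real use they come from the OrderState fields of an order placed with whatIf set, and the exact field names may vary by API version):

```python
# Hypothetical values, standing in for the OrderState returned when an order
# is placed with order.whatIf = True (field names follow the TWS API OrderState):
what_if = {
    "equityWithLoanBefore": 100_000.0,
    "equityWithLoanAfter":   99_950.0,  # e.g. estimated commissions reduce equity
    "maintMarginBefore":      20_000.0,
    "maintMarginAfter":       26_000.0, # the new position raises the requirement
}

# Excess liquidity ~= equity with loan - maintenance margin, so its delta is
# the equity/cash change minus the margin change (the formula in the question):
before = what_if["equityWithLoanBefore"] - what_if["maintMarginBefore"]
after = what_if["equityWithLoanAfter"] - what_if["maintMarginAfter"]
excess_liquidity_delta = after - before
print(excess_liquidity_delta)  # -50 equity change minus +6000 margin change
```

Reading the delta off the before/after pairs sidesteps having to derive the formula by hand.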


Re: C++ preventing EReader reading when socket is closed

 

I like evolution, so I may be wrong, but here I am not sure I would flag this as a bug.
Just out of curiosity, what undesirable side effect does the current implementation generate?

Some food for thought:
These are tricky parts, as they require tracking all the threads together to do a proper analysis; again, I won't pretend I know all the implications.
I use it in C++/Win32 and it works really well and fast (async), without any issue worth a change from my perspective (IMHO).

Because:
1 - If I ask for a "disconnect", it is to be acted upon immediately; I don't want any new data to be processed.
Exiting a multithreaded application gracefully and fast can become a tricky process;
I am ready to live with recv receiving remnants of the previous telco. It's a philosophical/aesthetic issue, I agree:
in an ideal world, we would be 100% sure that the last transaction was completed. Yes. But we are not (at least I am not) in that ideal world.

Also the "error" system of IB have to be seen as a client messaging service Same for WIN32 where GetLastError() is sometime used as a signal (Example when looking for enumeration of files, this is the only way to get a "no more file" info.)
The OS handle gracefully zombie packet on close socket, so no harm. (surely not the only apps that do that).
Codes are used to that, just look at handleSocketError you see a hard errno = 0; typical of way to reset it without concerns for unhandled error.

2 - Related to #1 above, putting WaitForSingleObject(m_hReadThread, INFINITE); before eDisconnect may execute a last putMessageToQueue.
While the basic supplied code will do a hard break, the while() loop with m_buf.size == 0 exits because m_isAlive == false, without executing anything else.

3 - Notice that an EReader object is used during connection as a limited-life (scoped) heap object to fetch the server protocol revision in synchronous mode, and to call startApi()
(I am not aware of what this proc does, but by the name of it I would avoid missing this call). This looks like a "trick" to overcome an unanticipated evolution of the protocol;
see EClientSocket::eConnectImpl(....)
Actions happening in the dtor are then to be taken carefully.

I would agree that the inversion of code lines you suggested might not affect the behavior, as it is synchronous (m_hReadThread = 0), and you ought to get the protocol revision; I don't even see how you could abort this single-message operation anyway (aside from a socket shutdown). I never saw a need.

As Buddy pointed out:
- It shows that playing with the dtor of EReader may have side effects.
In particular, there is legacy code without a call to ->stop(); it is implicit in the ~dtor and somewhere does the job.
- Just to mention it: the m_signal used for this connect-time EReader is shared by this ephemeral EReader, and you will most probably use the same one for your main EReader. It needs care; there are people who use more than one EReader on the same m_signal.


Re: Implied Volatility of an EXPIRY

 

Resurrecting an old thread as I have a similar question.

I think I've convinced myself that the way the "IV of a specific chain" is calculated is the same way that the VIX is calculated, as illustrated in this white paper. There is no Black-Scholes of any sort going on with this calculation.

That means that from the API, it's possible to calculate the _current_ IV of a specific expiry, but not the _history_ of the IV that's shown in Volatility Lab. Has anyone figured out a way to do that? (I suspect the answer is it's not possible. If that's the case, are people using IVolatility.com, which looks like it no longer has any free component? What about getvolatility.com?)

My other question is what exactly is returned when you request historical volatility from the API? My assumption is that it is something similar to the VIX calculation, where the two expiries closest to 30 days are interpolated to that 30-day mark, although I haven't tried to do a sample calculation to confirm.


Re: APIPending status?

Colin Beveridge
 

Thanks, Jürgen, being less of a doofus with the contract definition seems to have helped.

(I had seen that documentation -- I stand by my opinion that it could be better documented; I think that "Uncommonly received" is a less helpful message than, say, "if this occurs repeatedly, check you are sending a properly-defined contract". In any case, I hope future people hitting the same problem[^0] will find this thread and find it helpful.)

[^0]: Even if it's just me, I'm ok with that.


Re: C++ preventing EReader reading when socket is closed

 

... I should add with the proposed code change, you can now stop the reader and disconnect the socket without actually destroying the EReader object by doing the following:


The last line here is where I wait for my own message processing thread to end.

This is now very clean and you can connect and disconnect from TWS without error and without having to destroy the EReader.


Re: C++ preventing EReader reading when socket is closed

 

What makes me feel uneasy is that according to the documentation:



It makes users think that by calling eDisconnect() you can safely disconnect from TWS without causing errors, but we have shown that this is not the case. You must "stop" the thread, then disconnect from the socket, in that order.

It is not possible for eDisconnect() to do that without creating a mutex locking system that shares mutexes between both EClientSocket and EReader. I agree with you that adding this complexity is not the right approach, for one because it will slow down the message receiving loop.
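The "stop the thread, then disconnect" ordering is language-independent. As a minimal illustration, here is the same pattern in Python: a reader loop guarded by an alive flag, whose thread is joined before its own socket is torn down. All names here are illustrative; none of this is TWS API code:

```python
import socket
import threading

class Reader:
    """Illustrative stand-in for EReader: a loop that reads until told to stop."""

    def __init__(self, sock):
        self.sock = sock
        self.alive = True  # plays the role of m_isAlive
        self.thread = threading.Thread(target=self.read_loop)

    def read_loop(self):
        while self.alive:
            try:
                data = self.sock.recv(4096)  # returns b"" once the peer closes
            except OSError:
                break  # socket closed under us: the 509-style surprise
            if not data:
                break

    def stop(self):
        self.alive = False  # flag first, so the loop exits at its next pass

# The safe order: stop the reader, let it finish, then tear down its socket.
a, b = socket.socketpair()
reader = Reader(a)
reader.thread.start()
reader.stop()         # 1. tell the loop to stop
b.close()             # 2. EOF unblocks the pending recv()
reader.thread.join()  # 3. wait for the reader thread to exit
a.close()             # 4. only now disconnect the reader's socket
print("clean shutdown:", not reader.thread.is_alive())
```

Reversing steps 3 and 4 is what produces the "read error on a closed socket" situation being discussed.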

What I am opting for, and will propose on GitHub when I get the approval to create a branch, is creating an EReader::stop() function like this:



then modifying the destructor to the following:



Then I will also propose that the C++ documentation for the eDisconnect() function be changed to say that, to disconnect safely, you must call EReader::stop() before calling EClientSocket::eDisconnect().




Re: C++ preventing EReader reading when socket is closed

 

On Sun, Oct 1, 2023 at 03:15 PM, David Armour wrote:

I do not feel that destroying the EReader object is the correct way to disconnect.

IDK if this is what made you uneasy or what you meant, but maybe you'd prefer to see the call to eDisconnect happen in readToQueue as the penultimate action (i.e. before the final signal is issued)? In retrospect this makes a bit more sense to me.


Re: ReqHistoricalTicks() & Futures Data Limitations, Rolling Expiring Contracts

 

My tool is extremely similar to yours, so no bottlenecks in code either.

On Sun, Oct 1, 2023 at 8:07 PM Brendan Lydon <blydon12@...> wrote:
I am doing this from my sim account. I should probably switch to my live account to hopefully get better times?

On Sun, Oct 1, 2023 at 7:04 PM Jürgen Reinold via <TwsApiOnGroupsIo=[email protected]> wrote:
Just after 15:10 US/Central this afternoon, I requested historical TickByTickLast data for Friday's NQZ3 session (20230928 17:00 through 20230929 16:00 US/Central):
  • I received 451,009 TickByTickLast objects for NQZ3
  • It took 445 requests/responses and 1,045 seconds to receive all data
  • that is an average of 2.347s per request (elapsed time)
  • but the median was sub-second at 0.659s.
We don't download historical data a lot so we did not put a lot of thought into the little tool:
  • it is a single threaded event processor (no sleeps, delays, or built-in pacing)
  • requests data in 1,000 tick chunks in reverse time order
  • converts each returned TickByTickLast into a relatively expensive immutable Java object
  • accumulates the objects in a list that is serialized into streamable Json objects and written to file each time more than 10,000 ticks have been accumulated

That means the tool makes about 9 out of 10 requests immediately (a few microseconds) after the callback for the previous request. There is a short processing delay (5ms to 40ms) before every tenth request, for the data serialization and file storage.

Now, the interesting finding is the discrepancy between average and median response times. IBKR paced the responses within chunks of ~60 seconds, each time with roughly the following rhythm:

  • Ten requests with response times around 600ms each
  • One request with 3.5 seconds
  • Three requests with 600ms each
  • One request with 4.5 seconds
  • One request with 600ms
  • One request with 5.5 seconds
  • Three requests with 6 seconds
  • One request with 12 seconds

Attached are a couple of charts of what that looked like. I do not have data on how long they keep that 60-second chunk rhythm up in case you download a couple of years' worth of data. My gut tells me that you will not be able to keep up the less-than-3-second average for very long runs.

Jürgen




On Sun, Oct 1, 2023 at 01:39 PM, <blydon12@...> wrote:
Running a script right now to get 2 years of tick data for NQ. It seems to be restricting my requests to one every 6 seconds. Are there times when this could improve? It is Sunday @ 2:30 p.m. where I am right now, for reference.


Re: ReqHistoricalTicks() & Futures Data Limitations, Rolling Expiring Contracts

 

I am doing this from my sim account. I should probably switch to my live account to hopefully get better times?

On Sun, Oct 1, 2023 at 7:04 PM Jürgen Reinold via <TwsApiOnGroupsIo=[email protected]> wrote:
Just after 15:10 US/Central this afternoon, I requested historical TickByTickLast data for Friday's NQZ3 session (20230928 17:00 through 20230929 16:00 US/Central):
  • I received 451,009 TickByTickLast objects for NQZ3
  • It took 445 requests/responses and 1,045 seconds to receive all data
  • that is an average of 2.347s per request (elapsed time)
  • but the median was sub-second at 0.659s.
We don't download historical data a lot so we did not put a lot of thought into the little tool:
  • it is a single threaded event processor (no sleeps, delays, or built-in pacing)
  • requests data in 1,000 tick chunks in reverse time order
  • converts each returned TickByTickLast into a relatively expensive immutable Java object
  • accumulates the objects in a list that is serialized into streamable Json objects and written to file each time more than 10,000 ticks have been accumulated

That means the tool makes about 9 out of 10 requests immediately (a few microseconds) after the callback for the previous request. There is a short processing delay (5ms to 40ms) before every tenth request, for the data serialization and file storage.

Now, the interesting finding is the discrepancy between average and median response times. IBKR paced the responses within chunks of ~60 seconds, each time with roughly the following rhythm:

  • Ten requests with response times around 600ms each
  • One request with 3.5 seconds
  • Three requests with 600ms each
  • One request with 4.5 seconds
  • One request with 600ms
  • One request with 5.5 seconds
  • Three requests with 6 seconds
  • One request with 12 seconds

Attached are a couple of charts of what that looked like. I do not have data on how long they keep that 60-second chunk rhythm up in case you download a couple of years' worth of data. My gut tells me that you will not be able to keep up the less-than-3-second average for very long runs.

Jürgen




On Sun, Oct 1, 2023 at 01:39 PM, <blydon12@...> wrote:
Running a script right now to get 2 years of tick data for NQ. It seems to be restricting my requests to one every 6 seconds. Are there times when this could improve? It is Sunday @ 2:30 p.m. where I am right now, for reference.


Re: ReqHistoricalTicks() & Futures Data Limitations, Rolling Expiring Contracts

 

Just after 15:10 US/Central this afternoon, I requested historical TickByTickLast data for Friday's NQZ3 session (20230928 17:00 through 20230929 16:00 US/Central):
  • I received 451,009 TickByTickLast objects for NQZ3
  • It took 445 requests/responses and 1,045 seconds to receive all data
  • that is an average of 2.347s per request (elapsed time)
  • but the median was sub-second at 0.659s.
We don't download historical data a lot so we did not put a lot of thought into the little tool:
  • it is a single threaded event processor (no sleeps, delays, or built-in pacing)
  • requests data in 1,000 tick chunks in reverse time order
  • converts each returned TickByTickLast into a relatively expensive immutable Java object
  • accumulates the objects in a list that is serialized into streamable Json objects and written to file each time more than 10,000 ticks have been accumulated

That means the tool makes about 9 out of 10 requests immediately (a few microseconds) after the callback for the previous request. There is a short processing delay (5ms to 40ms) before every tenth request, for the data serialization and file storage.
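The reverse-time chunking described above can be sketched as follows; fetch_chunk is a hypothetical stand-in for reqHistoricalTicks (up to 1,000 ticks ending at a given time, oldest first within each batch):

```python
def download_backwards(fetch_chunk, end_time, start_time, chunk=1000):
    """Page through tick history in reverse time order, as described above."""
    ticks = []
    while end_time > start_time:
        batch = fetch_chunk(end_time, chunk)  # up to `chunk` ticks ending at end_time
        if not batch:
            break
        ticks[:0] = batch        # prepend: windows arrive newest-window-first
        end_time = batch[0][0]   # next request ends where this window began
    return ticks

# Tiny fake data source: one (time, price) tick per time unit.
def fake_fetch(end, n):
    lo = max(0, end - n)
    return [(t, 100.0) for t in range(lo, end)]

ticks = download_backwards(fake_fetch, end_time=2500, start_time=0)
print(len(ticks))  # 2500 ticks, reassembled oldest-first
```

Stepping the end time back to the first tick of each returned window is what keeps the chunks contiguous without gaps or overlaps.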

Now, the interesting finding is the discrepancy between average and median response times. IBKR paced the responses within chunks of ~60 seconds, each time with roughly the following rhythm:

  • Ten requests with response times around 600ms each
  • One request with 3.5 seconds
  • Three requests with 600ms each
  • One request with 4.5 seconds
  • One request with 600ms
  • One request with 5.5 seconds
  • Three requests with 6 seconds
  • One request with 12 seconds

Attached are a couple of charts of what that looked like. I do not have data on how long they keep that 60-second chunk rhythm up in case you download a couple of years' worth of data. My gut tells me that you will not be able to keep up the less-than-3-second average for very long runs.
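The average-versus-median effect is easy to reproduce from that rhythm (the numbers below are taken from the list above; a handful of slow responses drags the mean well above the median):

```python
import statistics

# Response times (seconds) for one ~60-second pacing chunk, per the rhythm above:
chunk = ([0.6] * 10          # ten fast responses
         + [3.5]             # one slow response
         + [0.6] * 3 + [4.5]
         + [0.6] + [5.5]
         + [6.0] * 3 + [12.0])

print(f"{len(chunk)} requests: mean {statistics.mean(chunk):.2f}s, "
      f"median {statistics.median(chunk):.2f}s")
```

The median stays at the fast-response level because the fast responses are the majority, which matches the reported ~2.3s average versus ~0.66s median.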

Jürgen




On Sun, Oct 1, 2023 at 01:39 PM, <blydon12@...> wrote:
Running a script right now to get 2 years of tick data for NQ. It seems to be restricting my requests to one every 6 seconds. Are there times when this could improve? It is Sunday @ 2:30 p.m. where I am right now, for reference.


Re: APIPending status?

 

First of all, order status ApiPending is not an error condition. It is the state of your order(s) right after you place them. And the order status section of the documentation does provide high-level descriptions of when to expect the various order states. It says for ApiPending:

ApiPending - Indicates order has not yet been sent to IB server, for instance if there is a delay in receiving the security definition. Uncommonly received.

I'd review the contract definition, and you should seriously think about requesting a fully configured contract from IBKR via reqContractDetails instead of initializing the various fields yourself. For example, I am not sure why you set a "strike" for an STK instrument, or initialize the trading class, symbol, and local symbol by hand; a contract returned by IBKR will have the important fields initialized with meaningful values.

And then you use a Time in Force of IOC. Does the behavior change if you place the order with a simpler TIF of, say, GTC or DAY?

Jürgen

On Sun, Oct 1, 2023 at 03:22 PM, Colin Beveridge wrote:
I'm now having a seemingly identical problem using IB's python client -- I can only imagine I'm doing something boneheaded, but without documentation on the error, I'm a bit stuck.

Running on v177, I set up an order like so:

con = Contract()
con.conId = 76792991
con.symbol = "TSLA"
con.secType = "STK"
con.strike = 0.
con.exchange = "SMART"
con.primaryExchange = "NASDAQ"
con.currency = "USD"
con.tradingClass = "NMS"
con.localSymbol = "TSLA"

order = Order()
order.action = "BUY"
order.totalQuantity = 1.
order.orderType = "MKT"
order.tif = "IOC"
order.account = "[account name]"
app.placeOrder([order id], con, order)

This sends:

3-[order id]-76792991-TSLA-STK--0.0---SMART-NASDAQ-USD-TSLA-NMS---BUY-1.0-MKT---IOC--[account name]--0--1-0-0-0-0-0-0-0--0-------0---1-0---0---0-0--0------0-----0-----------0---0-0---0--0-0-0-0--1.7976931348623157e+308-1.7976931348623157e+308-1.7976931348623157e+308-1.7976931348623157e+308-1.7976931348623157e+308-0----1.7976931348623157e+308-----0-0-0--2147483647-2147483647-0---

... but a call to openOrders gives

OrderStatus. Id: [order id] Status: ApiPending Filled: 0 Remaining: 1 AvgFillPrice: 0 PermId: 0 ParentId: 0 LastFillPrice: 0 ClientId: 3 WhyHeld: MktCapPrice: 0

As I say, I imagine I'm setting something up incorrectly. If anyone can point me towards what (or even where to look), I'd be super grateful.


Re: APIPending status?

Colin Beveridge
 

I'm now having a seemingly identical problem using IB's python client -- I can only imagine I'm doing something boneheaded, but without documentation on the error, I'm a bit stuck.

Running on v177, I set up an order like so:

con = Contract()
con.conId = 76792991
con.symbol = "TSLA"
con.secType = "STK"
con.strike = 0.
con.exchange = "SMART"
con.primaryExchange = "NASDAQ"
con.currency = "USD"
con.tradingClass = "NMS"
con.localSymbol = "TSLA"

order = Order()
order.action = "BUY"
order.totalQuantity = 1.
order.orderType = "MKT"
order.tif = "IOC"
order.account = "[account name]"
app.placeOrder([order id], con, order)

This sends:

3-[order id]-76792991-TSLA-STK--0.0---SMART-NASDAQ-USD-TSLA-NMS---BUY-1.0-MKT---IOC--[account name]--0--1-0-0-0-0-0-0-0--0-------0---1-0---0---0-0--0------0-----0-----------0---0-0---0--0-0-0-0--1.7976931348623157e+308-1.7976931348623157e+308-1.7976931348623157e+308-1.7976931348623157e+308-1.7976931348623157e+308-0----1.7976931348623157e+308-----0-0-0--2147483647-2147483647-0---

... but a call to openOrders gives

OrderStatus. Id: [order id] Status: ApiPending Filled: 0 Remaining: 1 AvgFillPrice: 0 PermId: 0 ParentId: 0 LastFillPrice: 0 ClientId: 3 WhyHeld: MktCapPrice: 0

As I say, I imagine I'm setting something up incorrectly. If anyone can point me towards what (or even where to look), I'd be super grateful.


Re: ReqHistoricalTicks() & Futures Data Limitations, Rolling Expiring Contracts

 

Running a script right now to get 2 years of tick data for NQ. It seems to be restricting my requests to one every 6 seconds. Are there times when this could improve? It is Sunday @ 2:30 p.m. where I am right now, for reference.