Possible causes of `RSIMessageTIMEOUT`?

I have observed that I sometimes get what appear to be timeouts when writing to SDOs, possibly when I write to them too quickly (though I haven't been able to pin down the exact conditions).

{RSI Error} ErrorNumber(10)(RSIMessageTIMEOUT)
Line(611)
Function(RSI::RapidCode::Impl::RapidCodeNetworkNode::ServiceChannelWriteCore)
Warning?(No)
Text(Timed out trying to write Service Channel (SDO) (Error 10) (RSI::RapidCode::Impl::RapidCodeNetworkNode::ServiceChannelWriteCore) (Object 3) (File ..\..\source\rapidcodenetworknode.cpp) (Line 611) (Version 8.1.8 for 04.04.02.RMP))
ShortText(Timed out trying to write Service Channel (SDO))

What kind of things could be possible causes?

In this specific instance, we were sending vel/accel profile parameters to a servo spindle, though I’m not confident that it matters.

@todd_mm

Basically, the slave didn't respond to the request in a timely manner. There can be any number of reasons for that on the slave end, and they vary by drive and are often not well documented.

We have a default timeout of 100 milliseconds, after which we stop waiting for a response and return that error. There are overloads available so you can use a custom timeout. That can be useful if you know the requested information takes a long time for the drive to provide, or if the drive gives an unusually low priority to responding to mailbox requests.
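
For example, here is a minimal sketch of using a longer timeout on a write. The method name follows the error text above, but the class name, header, and overload signature are assumptions; check your RapidCode headers for the real API.

```cpp
#include "rsi.h" // RapidCode API header; exact name/path depends on your install

using namespace RSI::RapidCode;

// Hypothetical helper: write a profile-velocity object with a custom
// SDO timeout instead of the 100 ms default.
void WriteProfileVelocity(NetworkNode* node)
{
    const int index     = 0x6081; // e.g. CiA 402 Profile Velocity
    const int subIndex  = 0x0;
    const int byteCount = 4;
    const int value     = 50000;
    const int timeoutMs = 500;    // well above the 100 ms default

    // Assumed timeout overload; the default-timeout version would
    // omit the last argument.
    node->ServiceChannelWrite(index, subIndex, byteCount, value, timeoutMs);
}
```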

Some Guidelines:

  • If you are quickly calling the same Service Channel Get or Set over and over, move it to an exchanged PDO entry instead.
  • Multiple threads calling these functions will commonly see the error RSI_NETWORK_FIRMWARE_IS_BUSY, so we recommend funneling your SDO access through a queue manager (see the sketch after this list).
  • Wait a millisecond between your SDO calls. Our timeout loop evaluates every millisecond, so you can run into trouble at request rates faster than 1 kHz: a later request can effectively replace the response you were originally waiting for.
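
A minimal sketch of such a queue manager, assuming the same hypothetical ServiceChannelWrite call as above: a single mutex serializes SDO traffic from any number of threads, and a 1 ms pause keeps consecutive requests from landing inside the same timeout tick.

```cpp
#include <chrono>
#include <mutex>
#include <thread>

#include "rsi.h" // RapidCode API header; exact name/path depends on your install

// Serializes SDO access so only one request is in flight at a time.
class SdoQueue
{
public:
    explicit SdoQueue(RSI::RapidCode::NetworkNode* node) : node_(node) {}

    void Write(int index, int subIndex, int byteCount, int value)
    {
        std::lock_guard<std::mutex> lock(mutex_); // one SDO at a time
        node_->ServiceChannelWrite(index, subIndex, byteCount, value);
        // Per the guideline above: give the 1 ms timeout loop a tick
        // before the next request goes out.
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }

private:
    RSI::RapidCode::NetworkNode* node_;
    std::mutex mutex_;
};
```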

I think the best next step is for us to help you get that regularly exchanged value (the vel/accel profile) into the PDOs.

My concern here is that there is a limit to how much data we can transfer via PDO, right? If I implement these (3-5) parameters as PDO entries and later find that some other feature needs the PDO space more, with not enough room for both, I'll have to convert this behavior back to SDOs.

What kind of practical limits are there on the quantity of items that can be exchanged via PDO?

How likely is it, if we expand to support more drives, that a drive manufacturer won't allow us to add new objects to PDOs? Is this a realistic problem, or just a theoretical one, because most manufacturers can handle this kind of change without a problem?

There are a few limits you could hit.

  • The size of an EtherCAT Datagram.
  • Some nodes limit the PDO entry count of a given PDO.
  • Some nodes only have fixed PDOs which aren’t modifiable.
  • The number of PDOs that are available and not mutually exclusive. (Sometimes it is just one.)

The number of nodes has a large impact on the size available to each node. You are basically limited to the size of an Ethernet packet, minus small byte-count reductions for a half dozen things (Ethernet header, EtherCAT frame header, datagram headers, etc.).
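
As a rough back-of-the-envelope, assuming a standard (non-jumbo) 1500-byte Ethernet payload and one EtherCAT datagram per node; real topologies vary (datagrams can span nodes, mailbox traffic shares the cycle), so treat the numbers as an estimate, not a spec:

```cpp
#include <cstdio>

int main()
{
    const int ethernetPayload  = 1500;   // standard Ethernet payload size
    const int ecatFrameHeader  = 2;      // EtherCAT frame header
    const int datagramOverhead = 10 + 2; // datagram header + working counter
    const int nodeCount        = 16;     // example node count

    const int budget = ethernetPayload - ecatFrameHeader
                     - nodeCount * datagramOverhead;

    std::printf("~%d bytes of process data across %d nodes (~%d each)\n",
                budget, nodeCount, budget / nodeCount);
    return 0;
}
```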

I have seen several EtherCAT nodes with completely fixed/exclusive PDOs. We can work with drive vendors to get more options and flexibility as we encounter them, though. A serious company should be able to make the appropriate adjustments, as the EtherCAT drive standard is designed for it.
