Analog I/O, [Un]Signed, Behavior

So, I’m implementing and testing analog I/O. I wired together one I/O point on two Beckhoff cards: a 2-channel ±10 V EL4032 (output) and a 2-channel ±10 V EL3002 (input).

In one test, I walk through various values of the output, verifying the new value and then sampling the input (averaging 5 successive samples) after each one. When I go fast enough, I frequently see instances where the test fails because the output value isn’t (quite) what I set it to be.
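Roughly, the test loop looks like this (a sketch; `setAnalogOut`/`readAnalogIn` are placeholder names I made up, not the actual RMP calls):

```cpp
#include <chrono>
#include <cstdint>
#include <thread>

// Placeholder stand-ins for the actual RMP I/O calls -- the names are mine.
static int32_t g_loopback = 0;
void    setAnalogOut(int32_t v) { g_loopback = v; }    // would write the EL4032 output PDO
int32_t readAnalogIn()          { return g_loopback; } // would read the EL3002 input PDO

// Average five successive input samples.
int32_t sampleInputAveraged()
{
    int64_t sum = 0;
    for (int i = 0; i < 5; ++i)
        sum += readAnalogIn();
    return static_cast<int32_t>(sum / 5);
}

int main()
{
    // Walk the output through the 16-bit range, 4096 counts at a time.
    for (int64_t value = 0; value <= 65535; value += 4096)
    {
        setAnalogOut(static_cast<int32_t>(value));
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // per-step settling time
        const int32_t input = sampleInputAveraged();
        // ... fail here if |input - value| exceeds some tolerance ...
        (void)input;
    }
}
```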

I started sampling the I/O points in the motion scope (with an additional 0.5 s delay between iterations) to see if I could spot anything going on.

When I step 4096 counts at a time, from 0 to 65535, this is what I observe in the scope.

Mostly good, except for that one spot in the middle.

Delays in Response

If I zoom in on (any) one of the other steps, I see this.

Question #1: Can you hypothesize about why the input gradually changes over 20 ms instead of jumping to the “expected” value?

My best guess is that “one does not simply change voltage” immediately, though 20 ms seems like a long time to take to do it. Is this just the behavior of analog I/O cards in general? Do you think it’s a function of the output or the input?

Signed/Unsigned?

The more troubling component of this is in the middle of the sequence. When I zoom in, I see this.

Conspicuously, this happened between setting the output to 28672 and setting it to 32768. This seems like software behavior to me. Naturally, 32768 is represented as 0x8000, which can also be interpreted as -32768 in a 16-bit signed integer. So, the “wraparound” from 28K to 32K is sort of understandable, though it’s quite undesirable.
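A two-line illustration of that reinterpretation (assuming two’s complement, which is what these cards and every mainstream compiler use):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    uint16_t raw = 32768;                           // 0x8000, the step where the sweep breaks
    int16_t  asSigned = static_cast<int16_t>(raw);  // two's-complement reinterpretation
    std::printf("unsigned: %u  signed: %d\n",
                static_cast<unsigned>(raw), asSigned);
    // prints: unsigned: 32768  signed: -32768
}
```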

Question #2: Are analog inputs/outputs inherently “signed” (in the RMP APIs and underlying implementation, at least)?

The API for setting outputs takes an int32_t (signed). However, “sign” is only a matter of interpretation.

When I send “unsigned” values, I just cast them to a same-size signed value and call the API.
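One wrinkle worth noting: a plain widening cast from `uint16_t` to `int32_t` preserves the numeric value (32768 stays +32768), so to reproduce the card’s 16-bit signed view you have to narrow through `int16_t` first. A sketch (the helper name is mine, not an RMP call):

```cpp
#include <cstdint>

// Convert an "unsigned" 16-bit count into the value the card will actually
// interpret, by narrowing through int16_t first. (Hypothetical helper.)
int32_t toSignedCount(uint16_t u)
{
    // int32_t direct = u;          // 32768 stays +32768 -- not what the card sees
    return static_cast<int16_t>(u); // 32768 becomes -32768 -- matches the wire value
}
```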

Are the I/O cards “signed” in the way they interpret the physical input state and report a value to the EtherCAT master?

Can you explain this behavior to me?

I attempted to find an answer in the Beckhoff documentation (EL40xxen.pdf and el30xx_en.pdf), but I didn’t find what I was looking for. Chances are, I don’t know the correct terminology to search for.
I looked for “respons”, “delay”, and “elaps” without finding pertinent answers in the output document. In the input document, “respons” did turn up “FIR and IIR filter”, though I’m not 100% certain that this is what I’m looking for.

BTW, in the graphs, “AO…” refers to an analog output. Similarly, “AI…” refers to an analog input.

Now that I’ve thought about it, I suppose a ±10 V card will likely behave like a signed value, and the behavior I observe is the output voltage changing from something very close to +10 V down to -10 V.
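If that’s right, the scaling would be the usual signed full-scale mapping. A sketch of my assumed conversion (verify the exact end-point convention against the Beckhoff docs):

```cpp
#include <cstdint>

// Assumed scaling for a +/-10 V, signed 16-bit interface:
// -32768 -> -10 V, +32767 -> one LSB under +10 V.
double countsToVolts(int16_t counts)
{
    return counts * (10.0 / 32768.0);
}
// e.g. countsToVolts(-32768) == -10.0, countsToVolts(16384) == 5.0
```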

I don’t have a 0-10 V analog input card to test with at the moment. Would you suspect that it would behave like a 16-bit unsigned value, or is it more likely to behave like a 15-bit unsigned value, or even a 16-bit signed value that doesn’t allow negative values?

Question 1

I think you should try disabling all the EL3002 filters to see if you get a different result. The FIR filter (enabled by default) and the IIR filter (disabled by default) could be the culprits.
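Something like this, if your setup has an SDO write handy (the `sdoWrite` helper is my own stand-in, and the 0x8000:06 “Enable filter” index is my reading of the EL3xxx CoE layout; verify it against el30xx_en.pdf):

```cpp
#include <cstdint>

// Stub standing in for a real CoE/SDO download via your EtherCAT master.
bool sdoWrite(int slave, uint16_t index, uint8_t subIndex, uint8_t value)
{
    (void)slave; (void)index; (void)subIndex; (void)value;
    return true; // a real implementation issues the SDO download here
}

// Disable the EL3002 input filter. 0x8000:06 is the "Enable filter" object
// I'd expect from the EL3xxx CoE layout; on many of these terminals the
// channel-1 filter settings apply to all channels.
bool disableInputFilter(int el3002SlaveIndex)
{
    return sdoWrite(el3002SlaveIndex, 0x8000, 0x06, 0);
}
```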

Question 2

The RMP will just send the raw bytes of the int32_t, so it’s up to you to get the right data in there. It appears that the EL4032 is a ±10 V, 12-bit output. On the EtherCAT side, it goes into a 16-bit PDO value. I think you probably just need to use values of 0 (-10 V) to 4096 (+10 V)?

If you had a 0-10 V, 16-bit analog output, I would think 0 = 0 V and 65535 = 10 V.

The descriptions for both of these devices say they are “12 bit”.

However, RMP always receives (?) a 16-bit value. When I vary the output between -32768 and +32767, stepping up 4096 at a time, I see the output voltage change by 1.25 V with every step. That’s consistent with the full 16-bit range spanning 20 V: (4096 / 65536) × 20 V = 1.25 V.

What does it mean when Beckhoff says the resolution/precision is “12-bit”? Is it only capable of outputting/measuring 4096 unique voltages?

I ran a test, incrementing the value by 1 between -4096 and +4095. What I see is that the corresponding input value jumps by 16, rather than by 1. That’s exactly the ratio between the 16-bit interface and the 12-bit converter: 2^16 / 2^12 = 16.
FWIW, here’s what I observed.

So, I guess the effective resolution on one or both ends is 12 bits, even though the values that I set in RMP use the entire 16-bit range.

Yes, if they are 12-bit, it’s only looking at the least significant 12 bits; whether that’s 0 to 4095 or -2048 to +2047, that’s the most it will do. Can you re-run with those ranges?

I set the analog output to some different values and measured the voltage with a voltmeter.

| Analog Output Value | Measured Voltage | Analog Input Value |
|--------------------:|-----------------:|-------------------:|
| -4096 | -1.25 V | -4078 |
| -2048 | -0.619 V | -2026 |
| +2047 | +0.628 V | +2039 |
| +4095 | +1.25 V | +4094 |

OK, so they really are expecting you to use ±32767 (16 bits) even though it’s a 12-bit DAC. :person_shrugging:
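So a volts-to-counts helper over the full signed range would look something like this (a sketch assuming ±32768 counts ↔ ±10 V, per your measurements):

```cpp
#include <cmath>
#include <cstdint>

// Assumes the full signed 16-bit range maps to +/-10 V, as the
// measurements in this thread suggest.
int16_t voltsToCounts(double volts)
{
    double counts = volts * (32768.0 / 10.0);
    if (counts < -32768.0) counts = -32768.0; // clamp to the representable range
    if (counts >  32767.0) counts =  32767.0;
    return static_cast<int16_t>(std::lround(counts));
}
// e.g. voltsToCounts(-1.25) == -4096, matching the table above.
```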

Here are the results of a similar test, with values ranging from `-(1<<(16-1))` to `(1<<(16-1))-1` (i.e., -32768 to +32767).

| Analog Output Value | Measured Voltage (V) | Analog Input Value |
|--------------------:|---------------------:|-------------------:|
| -32768 | -10 | -32749 |
| -28672 | -8.76 | -28646 |
| -24576 | -7.51 | -24559 |
| -20480 | -6.26 | -20459 |
| -16384 | -5 | -16369 |
| -12288 | -3.75 | -12263 |
| -8192 | -2.5 | -8183 |
| -4096 | -1.24 | -4082 |
| 0 | 0 | 15 |
| +4096 | +1.25 | +4093 |
| +8192 | +2.5 | +8196 |
| +12288 | +3.75 | +12281 |
| +16384 | +5.01 | +16383 |
| +20480 | +6.26 | +20472 |
| +24576 | +7.52 | +24575 |
| +28672 | +8.77 | +28661 |
| +32767 | +10.02 | +32749 |

My guess is that Beckhoff wants to provide a “common” interface for all/most of their cards, so they expose 16-bit values and leave some number of LSBs meaningless, based on the stated precision of the DAC.
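Under that theory, the card effectively ignores the four least significant bits. A sketch of that model, assuming truncation (the card may round instead):

```cpp
#include <cstdint>

// Model of a 12-bit DAC behind a 16-bit interface: the bottom four bits
// cannot change the output voltage. (Assumes truncation, not rounding.)
int16_t effective12BitValue(int16_t commanded)
{
    return static_cast<int16_t>(commanded & ~0xF); // zero the 4 meaningless LSBs
}
// Commanded values inside the same 16-count bucket produce the same voltage,
// which is why incrementing by 1 made the measured input jump in steps of 16.
```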