Int16 and Int32

As I am not a programmer, this may sound really stupid. Anyway, I am trying to get a simple TED5000 interface working - just to provide current usage and recent peaks and lows. If I request raw second history from the TED, it comes back base64 encoded. Once I decode a row, I get a binary string that is just a sequence of 9 fields of varying lengths. The first 6 fields are easy to handle - they are each one byte long, and I just use string.byte to parse each byte and then string.format to convert to integers. The next three fields are a little more painful. Fields 7 and 8 are four bytes long (int32) and field 9 is two bytes long (int16). How would I go about converting these values into integers?

Again, I am not a programmer… so I may not be wording my question properly. An example base64 encoded row is:

CwEMCgk2ngIAAAAAAACSCQ==

Using perl on the CLI, I converted this to:

111121095467002450
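(For reference, working from the field sizes above and the offsets used later in the thread, one decoded row is 16 bytes:

bytes 1-6     six single-byte fields
bytes 7-10    field 7, int32
bytes 11-14   field 8, int32
bytes 15-16   field 9, int16)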

First step I would take would be to print the decoded value as hex instead of decimal. That lets you strip off the bytes, since every pair of hex digits is one byte. Your 16-bit value will be four hex digits long, and the 32-bit ones will be eight hex digits long. You should be able to break the row up that way, and then you just have to figure out the endianness of your values, which should be possible.

–Richard
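(In Lua, that hex dump might look something like the sketch below; tohex is just an illustrative name, and decoded is assumed to already hold the base64-decoded byte string:)

-- print each byte of a binary string as a two-digit hex pair
local function tohex(s)
    return (s:gsub(".", function(c)
        return string.format("%02x ", string.byte(c))
    end))
end

print(tohex(decoded))  --> 0b 01 0c 0a 09 36 9e 02 00 00 00 00 00 00 92 09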

Thanks @rlmalisz for the response… but I don’t think I follow. When you say print the converted value in hex rather than decimal, where do you mean? I base64-decode the value returned from the TED. From there, I just handle the output from the base64 decode. Are you saying I should convert this decoded string to hex, then break it apart as I would any string?

So I base64-decoded this to binary data, and what I got (in hex) was these 16 bytes:

0b 01 0c 0a 09 36 9e 02 00 00 00 00 00 00 92 09

So the first batch is your six individual bytes. The next two should be your int32s. The last one is your int16.

What do you mean by “handle the output from the base64 decode”? What you generally get back from a base64 decode is some number of bytes of non-ASCII data. Bytes are 8 bits apiece, so I have broken your value up into the size chunks you were looking for. Data is data, but it’s a lot easier to break binary data up in hex, and then translate the parts to decimal if you really need to.

–Richard

Thanks Richard. I guess I need to figure out how to convert the binary to hex in Lua. As for handling the output, I was using the string.byte command on the first 6 chars, then formatting it.

No problem. What is it you need to do with the three non-byte values? Check for bits? Check for specific values?

You don’t really have to convert the binary data to hex; that’s just a way of looking at it that keeps the bits from the various bytes from getting schmooshed together. A base64 decode should give you back a length and that many bytes of binary data. I don’t know Lua (yet), but in something akin to C, you’d have code that looked like this:

unsigned char buffer[20];   /* room to spare */
const char *encoded = "CwEMCgk2ngIAAAAAAACSCQ==";
int length;
uint16_t val16;
uint32_t val32a, val32b;

/* base64decode and die are stand-ins for whatever decoder
   and error handling you have on hand */
if (!base64decode(encoded, buffer, &length, 20))
    die;

/* first uint32 starts 6 bytes in (buffer[6] is the 7th byte) */
#ifdef LITTLE_ENDIAN
/* least significant byte comes first: work backward from buffer[9] */
val32a = buffer[9];
val32a = val32a * 256 + buffer[8];
val32a = val32a * 256 + buffer[7];
val32a = val32a * 256 + buffer[6];
#else
/* most significant byte comes first */
val32a = buffer[6];
val32a = val32a * 256 + buffer[7];
val32a = val32a * 256 + buffer[8];
val32a = val32a * 256 + buffer[9];
#endif

And so on. Do you have a clue what range the int16 is supposed to be in? If it’s something akin to 37,385, then your numbers are big-endian. If it’s more likely 2,450, then it’s little-endian; that is, the least significant bytes come first.

Hope this helps. I’m a nerd by occupation, and tend to think about this stuff in hex.

–Richard
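(To make that arithmetic concrete with the last two bytes of the example row, 92 09:)

print(0x92 * 256 + 0x09)  --> 37385, big-endian reading (0x92 is the high byte)
print(0x09 * 256 + 0x92)  --> 2450, little-endian reading (0x09 is the high byte)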

In addition to figuring out little-endian vs. big-endian, you will need to know whether the value is signed or not. Typically the most significant bit is used as the sign indicator.
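(If a field did turn out to be signed, a two’s-complement correction along these lines would convert the unsigned reading; tosigned16 is just an illustrative name, not from the thread:)

-- fold an unsigned 16-bit reading into the signed range
local function tosigned16(v)
    if v >= 0x8000 then
        v = v - 0x10000  -- sign bit set: the value is negative
    end
    return v
end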

Here’s some Lua code to mull over:

Returns the 2nd byte of buffer:

my_val = string.sub(buffer, 2, 2)

Converts a binary value in a single byte to a Lua number:

function bin2num (mybyte) return tonumber(string.byte(mybyte)) end

So using the above function, the little-endian unsigned code in Lua might look something like this:

val32a = bin2num(string.sub(buffer, 10, 10))
val32a = val32a * 256 + bin2num(string.sub(buffer, 9, 9))
val32a = val32a * 256 + bin2num(string.sub(buffer, 8, 8))
val32a = val32a * 256 + bin2num(string.sub(buffer, 7, 7))

Notice that Lua uses 1-based indexing for its substrings, where C arrays are 0-based.

Thanks guys. All I needed was the math - again, I would have learned something like that had I actually attended my intro to comp sci class, or gotten any farther than my first year of college ;).
Anyway, I think I’m good to go.
Guard, FYI, the string.sub can be eliminated, as string.byte can be told which byte(s) to grab. Also, this is a combination of unsigned chars and signed ints per the API, but it doesn’t seem to make any difference here. I grab any of the bytes and get the info needed using string.byte.

And it is little-endian.

Thanks again all.
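(The multi-byte form mentioned above looks like this; string.byte with a start and an end index returns one value per byte:)

-- pull bytes 7 through 10 of the decoded row in one call
local b7, b8, b9, b10 = string.byte(decoded, 7, 10)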

Alright, thanks to everybody’s help, I have simplified the little-endian parsing into a function (the tonumber wasn’t needed):

function formatInts(decoded, a, b)
    local formatReturn = 0
    for i = a, b do
        -- byte at position a is least significant (little-endian)
        formatReturn = formatReturn + string.byte(decoded, i) * math.pow(256, i - a)
    end
    return formatReturn
end

It is called just by passing the base64-decoded string and the start and finish points of the values I’m looking for… for example:

local formatPower = formatInts(decodedBase64String, 7, 10)
local formatCost = formatInts(decodedBase64String, 11, 14)
local formatVoltage = formatInts(decodedBase64String, 15, 16) / 20

Piece of cake… thanks again guys.
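(As a sanity check against the example row from the top of the thread, whose decoded bytes were 0b 01 0c 0a 09 36 9e 02 00 00 00 00 00 00 92 09, the function above gives:)

print(formatInts(decodedBase64String, 7, 10))        --> 670    (bytes 9e 02 00 00)
print(formatInts(decodedBase64String, 11, 14))       --> 0      (bytes 00 00 00 00)
print(formatInts(decodedBase64String, 15, 16) / 20)  --> 122.5  (bytes 92 09 = 2450, then / 20)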