How to send an integer greater than 127 from Android (server, Java) to a computer (client, C) using a byte array? [closed]
I want to send an integer value (6000) from Android. I have to transfer it as a byte[], so I tried converting my int to a byte[]. But [0, 0, 23, 112] is being stored. Could someone help?
Solution 1:[1]
[0,0,23,112] is 6000.
As you said, you must send the data in the shape of a byte array. A single byte is 8 bits; a single bit is an on/off switch. With 8 on/off switches, you can represent 256 different unique states (2^8 is 256). A byte is just that, and it ends there. The bit sequence 00000101 is commonly understood to mean '5', but that's just convention. The computer doesn't know what 5 is, it just knows bits and bytes; it keeps seeing 00000101. If you call System.out.println and pass that byte, and you see 5? That's println that decided to render it that way. It's not a universal truth about bytes.
In java specifically, all the various methods that interact with bytes, including println, have decreed that they interpret byte values as two's complement signed. That means that it counts up from 0 to 127, then 'rolls over' to -128, and as you keep incrementing your bits, goes back to -1, at which point we've covered all the 256 unique combinations (0, that's 1 combination. 127 positive integers, 128 negative ones: 1+127+128 is 256). But, again, just a choice.
This is why there is no such thing as a "signed byte" and an "unsigned byte", as far as the byte itself is concerned. The question 'is it signed or unsigned?' is for the code that prints to decide. When you put bytes on a wire or in a file, it's irrelevant. In that sense, the byte 255 and the byte -1 are the identical value. That value (the bit sequence 11111111) prints as -1 if the code that does the printing decides to treat it as signed, and prints as 255 if the printing code decides to treat it as unsigned.
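To make that concrete, here is a minimal sketch showing the same bit pattern rendered both ways (the class name is just for illustration):

```java
public class ByteInterpretation {
    public static void main(String[] args) {
        byte b = (byte) 0b11111111; // all 8 bits on

        // Java's built-in methods treat byte as two's complement signed:
        System.out.println(b);                     // -1

        // Masking with 0xFF widens to int, keeping the low 8 bits as-is,
        // which gives the unsigned interpretation:
        System.out.println(b & 0xFF);              // 255

        // Since Java 8 there is also a named helper for exactly this:
        System.out.println(Byte.toUnsignedInt(b)); // 255
    }
}
```

Note that the byte variable itself never changes; only the way we ask Java to render it does.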
This also explains why you can't "just" assign, say, 200 to a byte variable: the compiler, too, treats bytes as signed, and 200 is outside the -128 to 127 range. But this:
byte b = (byte) 200;
works fine and does exactly what you want.
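A short sketch of what that cast does in practice (the printed values follow from the signed/unsigned discussion above):

```java
public class UnsignedByteDemo {
    public static void main(String[] args) {
        // byte b = 200;       // does not compile: 200 is outside -128..127
        byte b = (byte) 200;   // compiles: stores the bit pattern 11001000

        System.out.println(b);        // -56, because println treats bytes as signed
        System.out.println(b & 0xFF); // 200, the unsigned reading of the same bits
    }
}
```

The cast doesn't change any bits; it just tells the compiler you accept the narrowing.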
However, even unsigned, bytes are still limited to the 0-255 range (or, at least, they can represent 256 unique things, and assuming we want to start at 0, that means only 0-255 is covered).
Thus, how do you represent higher numbers? Simply by adding more bytes.
The exact same thing happens when you count. We have 10 digits in the common western arabic numeral system: A single digit symbol can differentiate 10 different things. So what happens if you want to count up to 12?
Once you get to the 10th digit (the 9), and you want to add 1 more to it, what do we do?
We invent a second digit! We increment the second digit (from blank/0 to 1), and start our first digit (what used to be the 9) over from the beginning. Thus, after 9, we have 10.
You can do the exact same thing with bytes. A common digit covers 10 different things (0-9). A byte, however, covers 256 different things (0-255).
So what do you do when you want to 'roll over' and you need to add 1 to 255?
You add a second digit byte, and restart the first digit byte from 0 again.
So, we go from a single byte with bit sequence 11111111 (representing 255, let's say we treat it as unsigned for this exercise), and when you add one to that, we end up with 2 bytes. The first byte is 00000001 (representing 1), and the second byte is 00000000, representing 0. Just like we went from 9 to 10.
Just like with human decimal, computers treat a sequence of bytes the same way: the leftmost digit counts the number of times we 'rolled over' the digit to its right. Except with bytes, of course, each 'rollover' is worth 256, whereas with human decimal digits, each rollover is worth 10. Mathematically: decimal (western arabic numeral) counting does 1 * 10^1 + 0 * 10^0, byte-based counting does 1 * 256^1 + 0 * 256^0. So, the byte sequence [1, 0] is 256.
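The positional arithmetic described above can be written out directly (a small sketch; the class and variable names are just for illustration):

```java
public class Base256 {
    public static void main(String[] args) {
        // Each byte is one base-256 digit, most significant first (big endian).
        int[] digits = {1, 0};       // the byte sequence [1, 0]
        int value = 0;
        for (int d : digits) {
            value = value * 256 + d; // shift left one base-256 place, add the digit
        }
        System.out.println(value);   // 256, just like decimal "10" is 1*10 + 0
    }
}
```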
Let's do that math now on the byte sequence: 23, 112.
We 'rolled over' our 256-ranged byte 23 times, so that's 23 * 256 = 5888, and the final digit byte adds 112 more. 5888 + 112 is... 6000!
Hence, whatever you did to turn 6000 into the byte array [0, 0, 23, 112]? That was correct. That is 6000 in big endian bytes.
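Presumably your conversion did something equivalent to the following round trip, sketched here with bit shifts (helper names are mine, not from your code):

```java
import java.util.Arrays;

public class BigEndian {
    // Pack an int into 4 big-endian bytes: most significant byte first.
    static byte[] toBytes(int n) {
        return new byte[] {
            (byte) (n >>> 24), (byte) (n >>> 16),
            (byte) (n >>> 8),  (byte) n
        };
    }

    // Reverse: rebuild the int from 4 big-endian bytes,
    // masking each byte with 0xFF to read it as unsigned.
    static int fromBytes(byte[] b) {
        return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
             | ((b[2] & 0xFF) << 8)  |  (b[3] & 0xFF);
    }

    public static void main(String[] args) {
        byte[] bytes = toBytes(6000);
        System.out.println(Arrays.toString(bytes)); // [0, 0, 23, 112]
        System.out.println(fromBytes(bytes));       // 6000
    }
}
```

Your C client can apply the same shift-and-mask logic (or ntohl) on its end to rebuild the integer.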
NB: Little endian means that the least significant byte comes first, i.e. that you write the digits in reverse order. 6000 in little endian is [112, 23, 0, 0]. Most protocols (networks, files, etc.) use big endian; 'network byte order' is big endian. Intel CPUs, however, work in little endian internally: if your computer has an Intel chip and it stores 6000 in its own memory banks, it stores [112, 23, 0, 0]. Some protocols/file formats just dump memory, and those tend to be little endian, because for a decade or two a lot of computers had Intel chips in them. However, the era of 'just dump memory straight to a file, voila, state saved' is ending, so for data interchange you will usually want big endian.
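If you'd rather not hand-roll the shifts, java.nio.ByteBuffer does this conversion for you; a brief sketch showing both byte orders:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class Endianness {
    public static void main(String[] args) {
        // ByteBuffer defaults to big endian, the usual network byte order:
        byte[] big = ByteBuffer.allocate(4).putInt(6000).array();
        System.out.println(Arrays.toString(big));    // [0, 0, 23, 112]

        // Switching the order reverses the bytes, matching how an
        // Intel CPU lays the int out in its own memory:
        byte[] little = ByteBuffer.allocate(4)
                .order(ByteOrder.LITTLE_ENDIAN)
                .putInt(6000)
                .array();
        System.out.println(Arrays.toString(little)); // [112, 23, 0, 0]
    }
}
```

On the Android side you would typically write the big-endian array to the socket's OutputStream; DataOutputStream.writeInt does the same big-endian packing in one call.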
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | rzwitserloot |
