In Java does anyone use short or byte?
Apart from using byte in streaming, I don't really see byte and short used much. On the other hand, I have seen long used where the actual value is at most 100 in magnitude and byte would be more appropriate. Is this a consequence of the relatively inexpensive nature of memory now, or is this just minutiae that developers needn't worry about?
java
asked Oct 8 '09 at 19:00 by non sequitor
edited Nov 9 at 18:16 by Stephen Kennedy
10 Answers
33 votes, accepted
They are used when programming for embedded devices that are short on memory or disk space, such as appliances and other electronic devices.
byte is also used in low-level network programming, where you send requests to web servers using raw headers, etc.
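A minimal sketch of the streaming case, using a ByteArrayInputStream to stand in for a file or network stream (the payload bytes here are invented for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ByteStreamDemo {
    public static void main(String[] args) throws IOException {
        byte[] payload = {0x48, 0x69};   // raw bytes, e.g. as received from a socket
        InputStream in = new ByteArrayInputStream(payload);

        byte[] buffer = new byte[8];
        int read = in.read(buffer);      // InputStream.read() fills a byte[], not an int[]
        System.out.println(read);        // 2 bytes read
        System.out.println(new String(buffer, 0, read, StandardCharsets.US_ASCII)); // "Hi"
    }
}
```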
That's why I don't see them, I'm never looking at appliance or electronics source code, cheers -- coming to think of it, that was the original intention of Java before it made waves in applets and then took off.
– non sequitor
Oct 8 '09 at 19:54
But then again, isn't Java too slow and too big for appliances? Isn't C/C++ the norm there?
– J. K.
Dec 23 '15 at 16:58
It is true to say that Java is used less in embedded devices, but still quite a bit. Java ME is quite small and ARM processors have a mode that can execute JVM bytecode directly. Garbage collection is not predictable though, so most real time applications would use lower level programming.
– mjaggard
Nov 22 '16 at 7:55
"devices that are short on memory" — No pun intended.
– MC Emperor
Nov 12 at 10:57
21 votes
The byte datatype is frequently used when dealing with raw data from a file or network connection, though it is mostly used as byte[]. The short and short[] types are often used in connection with GUIs and image processing (for pixel locations and image sizes), and in sound processing.
The primary reason for using byte or short is one of clarity. The program code states categorically that only 8 or 16 bits are to be used, and when you accidentally use a larger type (without an appropriate typecast) you get a compilation error. (Admittedly, this could also be viewed as a nuisance when writing the code ... but then again, the presence of the typecasts flags to the reader that truncation is happening.)
You don't achieve any space saving by using byte or short instead of int for simple variables, because most Java implementations align stack variables and object members on word boundaries. However, primitive array types are handled differently; i.e. elements of boolean, byte, char and short arrays are byte-aligned. But unless the arrays are large in size or large in number, they don't make any significant contribution to the app's overall memory usage.
So I guess that the main reason developers don't use byte or short as much as you (a C developer?) might expect is that it really doesn't make much (or often any) difference. Java developers tend not to obsess over memory usage the way old-school C developers did :-).
If you tried to obsess over memory usage, I think Java would drive you mad. However, as a Java Dev myself, it's nice to know that I'm 'doing my bit' so I always choose the most succinct type where possible. It's not only about memory usage, it's about clarity.
– Chris Hatton
Dec 31 '14 at 3:28
@ChrisHatton - Well "your bit" is probably wasted effort (or harmful) if you are concerned about memory usage and performance. Seriously, undirected micro-optimization is usually wasted effort. And I already mentioned the clarity issue.
– Stephen C
Dec 31 '14 at 3:33
13 votes
On a 64-bit processor, the registers are all 64-bit, so if your local variable is assigned to a register and is a boolean, byte, short, char, int, float, double or long, it doesn't use memory and doesn't save any resources.
Objects are 8-byte aligned, so they always take up a multiple of 8 bytes in memory. This means Boolean, Byte, Short, Character, Integer, Long, Float, Double, AtomicBoolean, AtomicInteger, AtomicLong and AtomicReference all use the same amount of memory.
As has been noted, the short types are used for arrays and for reading/writing data formats. Even then, short is not used very often IMHO.
It's also worth noting that a GB costs about £80 in a server, so a MB is about 8 pence and a KB is about 0.008 pence. The difference between a byte and a long is about 0.00006 pence. Your time is worth more than that, especially if you ever have a bug which resulted from a data type that was too small.
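The cost figures quoted above work out roughly as follows (the £80-per-GB server price is the answer's assumption, not a current one):

```java
public class MemoryCost {
    public static void main(String[] args) {
        double pencePerGB = 80 * 100;                       // £80 per GB, in pence
        double pencePerByte = pencePerGB / (1024L * 1024 * 1024);
        double byteVsLong = 7 * pencePerByte;               // a long is 7 bytes bigger than a byte
        System.out.printf("%.7f pence%n", byteVsLong);      // on the order of 0.00005 pence
    }
}
```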
Very good reminder about the boundaries. The calculation was also fun, but I'm pretty sure int or long weren't made just for avoiding bugs. It's more of a habit thing.
– Charles Roberto Canato
Apr 30 '14 at 22:56
5 votes
I would most often use the short and byte types when working with binary formats and DataInput/DataOutput instances. If the spec says the next value is an 8-bit or 16-bit value and there's no value in promoting them to int (perhaps they're bit flags), they are an obvious choice.
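A small sketch of that pattern with DataOutputStream/DataInputStream (the "format" here is invented: one 8-bit flags field followed by a 16-bit length):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class BinaryFormatDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeByte(0x2A);     // 8-bit flags field
        out.writeShort(512);     // 16-bit length field, big-endian

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        byte flags = in.readByte();    // stays a byte: the spec says 8 bits
        short length = in.readShort(); // stays a short: the spec says 16 bits
        System.out.println(flags + " " + length); // 42 512
    }
}
```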
3 votes
Arithmetic on bytes and shorts is more awkward than with ints. For example, if b1 and b2 are two byte variables, you can't write byte b3 = b1 + b2 to add them. This is because Java never does arithmetic internally in anything smaller than an int, so the expression b1 + b2 has type int even though it is only adding two byte values. You'd have to write byte b3 = (byte) (b1 + b2) instead.
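That behaviour is easy to see in a small runnable sketch (values invented; note the cast, and the wrap-around when the sum no longer fits in a byte):

```java
public class BytePromotion {
    public static void main(String[] args) {
        byte b1 = 100, b2 = 100;
        // byte b3 = b1 + b2;       // does not compile: b1 + b2 has type int
        byte b3 = (byte) (b1 + b2); // cast required; 200 wraps around to -56
        int  i3 = b1 + b2;          // no cast needed: the result is already an int
        System.out.println(b3 + " " + i3); // -56 200
    }
}
```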
I'm just learning Java and this is something I initially struggled with. I couldn't understand why my book always used the int type. For example, it didn't make sense to me to use the int type for the number of days in a month. So I would change it in my code and would later run into issues.
– Jeffpowrs
Aug 17 '13 at 16:45
2 votes
I used short extensively when creating an emulator based on a 16-bit architecture. I considered using char so I could have things unsigned, but the spirit of using a real integer type won out in the end.
Edit: regarding the inevitable question about what I did when I needed the most significant bit: with the thing I was emulating, it happened to almost never get used. In the few places it was used, I just used bitwise operators or math hackery.
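The usual trick for Java's missing unsigned 16-bit type is masking with 0xFFFF when you need the unsigned value back; a sketch of what that kind of "math hackery" looks like:

```java
public class UnsignedShortDemo {
    public static void main(String[] args) {
        short reg = (short) 0xFFFE;   // a 16-bit register holding 65534
        int unsigned = reg & 0xFFFF;  // widen to int and mask to recover 65534
        System.out.println(reg + " " + unsigned); // -2 65534

        reg++;                        // 16-bit arithmetic wraps naturally
        System.out.println(reg & 0xFFFF); // 65535
    }
}
```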
1 vote
I think that in most applications short has no domain meaning, so it makes more sense to use int.
1 vote
short and others are often used for storing image data. Note that it is the number of bits which is really important, not the arithmetic properties (which just cause promotion to int or better).
short is also used as an array index in JavaCard (1.0 and 2.0, IIRC, but not 3.0, which also has an HTTP stack and web services).
1 vote
byte comes up all the time: buffers, specifically for networks, files, graphics, serialization, etc.
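For the buffer case, java.nio.ByteBuffer is the idiomatic wrapper over a byte[]; a minimal sketch:

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8); // backed by a byte[8]
        buf.put((byte) 1);       // write individual bytes
        buf.putShort((short) 2); // and wider values, big-endian by default
        buf.putInt(3);
        buf.flip();              // switch from writing to reading
        System.out.println(buf.get() + " " + buf.getShort() + " " + buf.getInt());
        // prints: 1 2 3
    }
}
```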
0 votes
Most of the time, there's no real good technical reason for a developer (Java, C#, BASIC, etc.) to decide between an int, short or byte, as long as the capacity is enough, of course. If the value will be under 2 billion, then int it will be.
Are you sure we'll never have people older than 255? Well, you never know!
Aren't 32,767 possible countries enough? Don't think too small!
In your example, you can be perfectly happy with your byte variable containing 100, if you are absolutely sure that it will NEVER overflow. Why do people use int the most? Because... because.
This is one of those things that most of us just do because we saw it done that way most of the time, and never asked why.
Of course, I have nothing against "all things int". I just prefer to use the right type for each kind of value, no stress involved.
10 Answers
10
active
oldest
votes
10 Answers
10
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
33
down vote
accepted
They are used when programming for embedded devices that are short on memory or disk space. Such as appliances and other electronic devices.
Byte is also used in low level web programming, where you send requests to web servers using headers, etc.
That's why I don't see them, I'm never looking at appliance or electronics source code, cheers -- coming to think of it, that was the original intention of Java before it made waves in applets and then took off.
– non sequitor
Oct 8 '09 at 19:54
but then again, isnt java too slow and too big for appliances? isnt c/c++ the norm there?
– J. K.
Dec 23 '15 at 16:58
It is true to say that Java is used less in embedded devices, but still quite a bit. Java ME is quite small and ARM processors have a mode that can execute JVM bytecode directly. Garbage collection is not predictable though, so most real time applications would use lower level programming.
– mjaggard
Nov 22 '16 at 7:55
"devices that are short on memory" — No pun intended.
– MC Emperor
Nov 12 at 10:57
add a comment |
up vote
33
down vote
accepted
They are used when programming for embedded devices that are short on memory or disk space. Such as appliances and other electronic devices.
Byte is also used in low level web programming, where you send requests to web servers using headers, etc.
That's why I don't see them, I'm never looking at appliance or electronics source code, cheers -- coming to think of it, that was the original intention of Java before it made waves in applets and then took off.
– non sequitor
Oct 8 '09 at 19:54
but then again, isnt java too slow and too big for appliances? isnt c/c++ the norm there?
– J. K.
Dec 23 '15 at 16:58
It is true to say that Java is used less in embedded devices, but still quite a bit. Java ME is quite small and ARM processors have a mode that can execute JVM bytecode directly. Garbage collection is not predictable though, so most real time applications would use lower level programming.
– mjaggard
Nov 22 '16 at 7:55
"devices that are short on memory" — No pun intended.
– MC Emperor
Nov 12 at 10:57
add a comment |
up vote
33
down vote
accepted
up vote
33
down vote
accepted
They are used when programming for embedded devices that are short on memory or disk space. Such as appliances and other electronic devices.
Byte is also used in low level web programming, where you send requests to web servers using headers, etc.
They are used when programming for embedded devices that are short on memory or disk space. Such as appliances and other electronic devices.
Byte is also used in low level web programming, where you send requests to web servers using headers, etc.
edited Aug 12 '10 at 18:32
answered Oct 8 '09 at 19:04
Shawn Mclean
28.4k79251383
28.4k79251383
That's why I don't see them, I'm never looking at appliance or electronics source code, cheers -- coming to think of it, that was the original intention of Java before it made waves in applets and then took off.
– non sequitor
Oct 8 '09 at 19:54
but then again, isnt java too slow and too big for appliances? isnt c/c++ the norm there?
– J. K.
Dec 23 '15 at 16:58
It is true to say that Java is used less in embedded devices, but still quite a bit. Java ME is quite small and ARM processors have a mode that can execute JVM bytecode directly. Garbage collection is not predictable though, so most real time applications would use lower level programming.
– mjaggard
Nov 22 '16 at 7:55
"devices that are short on memory" — No pun intended.
– MC Emperor
Nov 12 at 10:57
add a comment |
That's why I don't see them, I'm never looking at appliance or electronics source code, cheers -- coming to think of it, that was the original intention of Java before it made waves in applets and then took off.
– non sequitor
Oct 8 '09 at 19:54
but then again, isnt java too slow and too big for appliances? isnt c/c++ the norm there?
– J. K.
Dec 23 '15 at 16:58
It is true to say that Java is used less in embedded devices, but still quite a bit. Java ME is quite small and ARM processors have a mode that can execute JVM bytecode directly. Garbage collection is not predictable though, so most real time applications would use lower level programming.
– mjaggard
Nov 22 '16 at 7:55
"devices that are short on memory" — No pun intended.
– MC Emperor
Nov 12 at 10:57
That's why I don't see them, I'm never looking at appliance or electronics source code, cheers -- coming to think of it, that was the original intention of Java before it made waves in applets and then took off.
– non sequitor
Oct 8 '09 at 19:54
That's why I don't see them, I'm never looking at appliance or electronics source code, cheers -- coming to think of it, that was the original intention of Java before it made waves in applets and then took off.
– non sequitor
Oct 8 '09 at 19:54
but then again, isnt java too slow and too big for appliances? isnt c/c++ the norm there?
– J. K.
Dec 23 '15 at 16:58
but then again, isnt java too slow and too big for appliances? isnt c/c++ the norm there?
– J. K.
Dec 23 '15 at 16:58
It is true to say that Java is used less in embedded devices, but still quite a bit. Java ME is quite small and ARM processors have a mode that can execute JVM bytecode directly. Garbage collection is not predictable though, so most real time applications would use lower level programming.
– mjaggard
Nov 22 '16 at 7:55
It is true to say that Java is used less in embedded devices, but still quite a bit. Java ME is quite small and ARM processors have a mode that can execute JVM bytecode directly. Garbage collection is not predictable though, so most real time applications would use lower level programming.
– mjaggard
Nov 22 '16 at 7:55
"devices that are short on memory" — No pun intended.
– MC Emperor
Nov 12 at 10:57
"devices that are short on memory" — No pun intended.
– MC Emperor
Nov 12 at 10:57
add a comment |
up vote
21
down vote
The byte
datatype is frequently used when dealing with raw data from a file or network connection, though it is mostly used as byte
. The short
and short
types are often used in connection with GUIs and image processing (for pixel locations & image sizes), and in sound processing.
The primary reason for using byte
or short
is one of clarity. The program code states uncategorically that only 8 or 16 bits are to be used, and when you accidentally use a larger type (without an appropriate typecast) you get a compilation error. (Admittedly, this could also be viewed as a nuisance when writing the code ... but once again the presence of the typecasts flags the fact that there is truncation happening to the reader.)
You don't achieve any space saving by using byte
or short
in simple variables instead of int
, because most Java implementations align stack variables and object members on word boundaries. However, primitive array types are handled differently; i.e. elements of boolean
, byte
, char
and short
arrays are byte aligned. But unless the arrays are large in size or large in number, they doesn't make any significant contribution to the app's overall memory usage.
So I guess that the main reason that developers don't use byte
or short
as much as you (a C developer?) might expect is that it really doesn't make much (or often any) difference. Java developers tend not to obsess over memory usage like old-school C developers did :-).
If you tried to obsess over memory usage, I think Java would drive you mad. However, as a Java Dev myself, it's nice to know that I'm 'doing my bit' so I always choose the most succinct type where possible. It's not only about memory usage, it's about clarity.
– Chris Hatton
Dec 31 '14 at 3:28
3
@ChrisHatton - Well "your bit" is probably wasted effort (or harmful) if you are concerned about memory usage and performance. Seriously, undirected micro-optimization is usually wasted effort. And I already mentioned the clarity issue.
– Stephen C
Dec 31 '14 at 3:33
add a comment |
up vote
21
down vote
The byte
datatype is frequently used when dealing with raw data from a file or network connection, though it is mostly used as byte
. The short
and short
types are often used in connection with GUIs and image processing (for pixel locations & image sizes), and in sound processing.
The primary reason for using byte
or short
is one of clarity. The program code states uncategorically that only 8 or 16 bits are to be used, and when you accidentally use a larger type (without an appropriate typecast) you get a compilation error. (Admittedly, this could also be viewed as a nuisance when writing the code ... but once again the presence of the typecasts flags the fact that there is truncation happening to the reader.)
You don't achieve any space saving by using byte
or short
in simple variables instead of int
, because most Java implementations align stack variables and object members on word boundaries. However, primitive array types are handled differently; i.e. elements of boolean
, byte
, char
and short
arrays are byte aligned. But unless the arrays are large in size or large in number, they doesn't make any significant contribution to the app's overall memory usage.
So I guess that the main reason that developers don't use byte
or short
as much as you (a C developer?) might expect is that it really doesn't make much (or often any) difference. Java developers tend not to obsess over memory usage like old-school C developers did :-).
If you tried to obsess over memory usage, I think Java would drive you mad. However, as a Java Dev myself, it's nice to know that I'm 'doing my bit' so I always choose the most succinct type where possible. It's not only about memory usage, it's about clarity.
– Chris Hatton
Dec 31 '14 at 3:28
3
@ChrisHatton - Well "your bit" is probably wasted effort (or harmful) if you are concerned about memory usage and performance. Seriously, undirected micro-optimization is usually wasted effort. And I already mentioned the clarity issue.
– Stephen C
Dec 31 '14 at 3:33
add a comment |
up vote
21
down vote
up vote
21
down vote
The byte
datatype is frequently used when dealing with raw data from a file or network connection, though it is mostly used as byte
. The short
and short
types are often used in connection with GUIs and image processing (for pixel locations & image sizes), and in sound processing.
The primary reason for using byte
or short
is one of clarity. The program code states uncategorically that only 8 or 16 bits are to be used, and when you accidentally use a larger type (without an appropriate typecast) you get a compilation error. (Admittedly, this could also be viewed as a nuisance when writing the code ... but once again the presence of the typecasts flags the fact that there is truncation happening to the reader.)
You don't achieve any space saving by using byte
or short
in simple variables instead of int
, because most Java implementations align stack variables and object members on word boundaries. However, primitive array types are handled differently; i.e. elements of boolean
, byte
, char
and short
arrays are byte aligned. But unless the arrays are large in size or large in number, they doesn't make any significant contribution to the app's overall memory usage.
So I guess that the main reason that developers don't use byte
or short
as much as you (a C developer?) might expect is that it really doesn't make much (or often any) difference. Java developers tend not to obsess over memory usage like old-school C developers did :-).
The byte
datatype is frequently used when dealing with raw data from a file or network connection, though it is mostly used as byte
. The short
and short
types are often used in connection with GUIs and image processing (for pixel locations & image sizes), and in sound processing.
The primary reason for using byte
or short
is one of clarity. The program code states uncategorically that only 8 or 16 bits are to be used, and when you accidentally use a larger type (without an appropriate typecast) you get a compilation error. (Admittedly, this could also be viewed as a nuisance when writing the code ... but once again the presence of the typecasts flags the fact that there is truncation happening to the reader.)
You don't achieve any space saving by using byte
or short
in simple variables instead of int
, because most Java implementations align stack variables and object members on word boundaries. However, primitive array types are handled differently; i.e. elements of boolean
, byte
, char
and short
arrays are byte aligned. But unless the arrays are large in size or large in number, they doesn't make any significant contribution to the app's overall memory usage.
So I guess that the main reason that developers don't use byte
or short
as much as you (a C developer?) might expect is that it really doesn't make much (or often any) difference. Java developers tend not to obsess over memory usage like old-school C developers did :-).
edited Jun 21 '13 at 4:51
answered Oct 9 '09 at 2:03
Stephen C
512k69560912
512k69560912
If you tried to obsess over memory usage, I think Java would drive you mad. However, as a Java Dev myself, it's nice to know that I'm 'doing my bit' so I always choose the most succinct type where possible. It's not only about memory usage, it's about clarity.
– Chris Hatton
Dec 31 '14 at 3:28
3
@ChrisHatton - Well "your bit" is probably wasted effort (or harmful) if you are concerned about memory usage and performance. Seriously, undirected micro-optimization is usually wasted effort. And I already mentioned the clarity issue.
– Stephen C
Dec 31 '14 at 3:33
add a comment |
If you tried to obsess over memory usage, I think Java would drive you mad. However, as a Java Dev myself, it's nice to know that I'm 'doing my bit' so I always choose the most succinct type where possible. It's not only about memory usage, it's about clarity.
– Chris Hatton
Dec 31 '14 at 3:28
3
@ChrisHatton - Well "your bit" is probably wasted effort (or harmful) if you are concerned about memory usage and performance. Seriously, undirected micro-optimization is usually wasted effort. And I already mentioned the clarity issue.
– Stephen C
Dec 31 '14 at 3:33
If you tried to obsess over memory usage, I think Java would drive you mad. However, as a Java Dev myself, it's nice to know that I'm 'doing my bit' so I always choose the most succinct type where possible. It's not only about memory usage, it's about clarity.
– Chris Hatton
Dec 31 '14 at 3:28
If you tried to obsess over memory usage, I think Java would drive you mad. However, as a Java Dev myself, it's nice to know that I'm 'doing my bit' so I always choose the most succinct type where possible. It's not only about memory usage, it's about clarity.
– Chris Hatton
Dec 31 '14 at 3:28
3
3
@ChrisHatton - Well "your bit" is probably wasted effort (or harmful) if you are concerned about memory usage and performance. Seriously, undirected micro-optimization is usually wasted effort. And I already mentioned the clarity issue.
– Stephen C
Dec 31 '14 at 3:33
@ChrisHatton - Well "your bit" is probably wasted effort (or harmful) if you are concerned about memory usage and performance. Seriously, undirected micro-optimization is usually wasted effort. And I already mentioned the clarity issue.
– Stephen C
Dec 31 '14 at 3:33
add a comment |
up vote
13
down vote
In a 64-bit processor, the registers are all 64-bit so if your local variable is assigned to a register and is a boolean, byte, short, char, int, float, double or long it doesn't use memory and doesn't save any resources.
Objects are 8-byte aligned so they always take up a multiple of 8-byte in memory. This means Boolean, Byte, Short, Character, Integer, Long , Float and Double, AtomicBoolean, AtomicInteger, AtomicLong, AtomicReference all use the same amount of memory.
As has been noted, short types are used for arrays and reading/writing data formats. Even then short is not used very often IMHO.
Its also worth noting that a GB cost about £80 in a server, so a MB is about 8 pence and a KB is about 0.008 pence. The difference between byte and long is about 0.00006 pence. Your time is worth more than that. esp if you ever have a bug which resulted from having a data type which was too small.
Very good reminder about the boundaries. The calculation was also fun, but I'm pretty sure int or long weren't made just for avoiding bugs. It's more of a habit thing.
– Charles Roberto Canato
Apr 30 '14 at 22:56
add a comment |
up vote
13
down vote
In a 64-bit processor, the registers are all 64-bit so if your local variable is assigned to a register and is a boolean, byte, short, char, int, float, double or long it doesn't use memory and doesn't save any resources.
Objects are 8-byte aligned so they always take up a multiple of 8-byte in memory. This means Boolean, Byte, Short, Character, Integer, Long , Float and Double, AtomicBoolean, AtomicInteger, AtomicLong, AtomicReference all use the same amount of memory.
As has been noted, short types are used for arrays and reading/writing data formats. Even then short is not used very often IMHO.
Its also worth noting that a GB cost about £80 in a server, so a MB is about 8 pence and a KB is about 0.008 pence. The difference between byte and long is about 0.00006 pence. Your time is worth more than that. esp if you ever have a bug which resulted from having a data type which was too small.
Very good reminder about the boundaries. The calculation was also fun, but I'm pretty sure int or long weren't made just for avoiding bugs. It's more of a habit thing.
– Charles Roberto Canato
Apr 30 '14 at 22:56
add a comment |
up vote
13
down vote
up vote
13
down vote
In a 64-bit processor, the registers are all 64-bit so if your local variable is assigned to a register and is a boolean, byte, short, char, int, float, double or long it doesn't use memory and doesn't save any resources.
Objects are 8-byte aligned so they always take up a multiple of 8-byte in memory. This means Boolean, Byte, Short, Character, Integer, Long , Float and Double, AtomicBoolean, AtomicInteger, AtomicLong, AtomicReference all use the same amount of memory.
As has been noted, short types are used for arrays and reading/writing data formats. Even then short is not used very often IMHO.
Its also worth noting that a GB cost about £80 in a server, so a MB is about 8 pence and a KB is about 0.008 pence. The difference between byte and long is about 0.00006 pence. Your time is worth more than that. esp if you ever have a bug which resulted from having a data type which was too small.
In a 64-bit processor, the registers are all 64-bit so if your local variable is assigned to a register and is a boolean, byte, short, char, int, float, double or long it doesn't use memory and doesn't save any resources.
Objects are 8-byte aligned so they always take up a multiple of 8-byte in memory. This means Boolean, Byte, Short, Character, Integer, Long , Float and Double, AtomicBoolean, AtomicInteger, AtomicLong, AtomicReference all use the same amount of memory.
As has been noted, short types are used for arrays and reading/writing data formats. Even then short is not used very often IMHO.
Its also worth noting that a GB cost about £80 in a server, so a MB is about 8 pence and a KB is about 0.008 pence. The difference between byte and long is about 0.00006 pence. Your time is worth more than that. esp if you ever have a bug which resulted from having a data type which was too small.
edited Aug 12 '10 at 19:41
answered Oct 10 '09 at 9:34
Peter Lawrey
440k55557957
440k55557957
Very good reminder about the boundaries. The calculation was also fun, but I'm pretty sure int or long weren't made just for avoiding bugs. It's more of a habit thing.
– Charles Roberto Canato
Apr 30 '14 at 22:56
add a comment |
Very good reminder about the boundaries. The calculation was also fun, but I'm pretty sure int or long weren't made just for avoiding bugs. It's more of a habit thing.
– Charles Roberto Canato
Apr 30 '14 at 22:56
Very good reminder about the boundaries. The calculation was also fun, but I'm pretty sure int or long weren't made just for avoiding bugs. It's more of a habit thing.
– Charles Roberto Canato
Apr 30 '14 at 22:56
Very good reminder about the boundaries. The calculation was also fun, but I'm pretty sure int or long weren't made just for avoiding bugs. It's more of a habit thing.
– Charles Roberto Canato
Apr 30 '14 at 22:56
add a comment |
up vote
5
down vote
I would most often use the short
and byte
types when working with binary formats and DataInput/DataOutput instances. If the spec says the next value is an 8bit or 16bit value and there's no value in promoting them to int
(perhaps they're bit flags), they are an obvious choice.
answered Oct 8 '09 at 19:26
McDowell
93.8k23172245
add a comment |
up vote
3
down vote
Arithmetic on bytes and shorts is more awkward than with ints. For example, if b1 and b2 are two byte variables, you can't write byte b3 = b1 + b2 to add them. This is because Java never does arithmetic internally in anything smaller than an int, so the expression b1 + b2 has type int even though it is only adding two byte values. You'd have to write byte b3 = (byte) (b1 + b2) instead.
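A minimal sketch of that pitfall, using the variable names from the answer:

```java
public class BytePromotion {
    public static void main(String[] args) {
        byte b1 = 100;
        byte b2 = 27;
        // byte b3 = b1 + b2;          // does not compile: b1 + b2 has type int
        byte b3 = (byte) (b1 + b2);    // explicit narrowing cast required
        System.out.println(b3);        // 127 still fits in a byte
        // Note the cast silently wraps if the sum overflows:
        // (byte) (100 + 28) is -128, not 128.
    }
}
```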
answered Aug 12 '10 at 19:26
Luke Woodward
44.3k126587
I'm just learning java and this is something I initially struggled with. I couldn't understand why my book always used the int type. For example it didn't make sense to me to use the int type for number of days in a month. So I would change it in my code and would later run into issues.
– Jeffpowrs
Aug 17 '13 at 16:45
add a comment |
up vote
2
down vote
I used short extensively when creating an emulator based on a 16-bit architecture. I considered using char so I could have unsigned values, but the spirit of using a real integer type won out in the end.
edit: regarding the inevitable question about what I did when I needed the most significant bit: with the thing I was emulating, it happened to almost never get used. In the few places it was used, I just used bitwise operators or math hackery.
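A sketch of one common way to handle the sign issue that edit alludes to: keep the 16-bit pattern in a short and mask with & 0xFFFF wherever the unsigned value is needed (the register here is hypothetical, not from Dinah's emulator):

```java
public class UnsignedShort {
    public static void main(String[] args) {
        short reg = (short) 0xFFFE;        // stored bit pattern: 0xFFFE (reads as -2)
        int unsigned = reg & 0xFFFF;       // reinterpret as unsigned: 65534
        // Arithmetic done in int, then narrowed back: wraps past 0xFFFF to 0x0001.
        short incremented = (short) (unsigned + 3);
        System.out.println(unsigned);                  // 65534
        System.out.println(incremented & 0xFFFF);      // 1
    }
}
```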
edited Sep 21 '10 at 2:07
answered Sep 21 '10 at 2:02
Dinah
25.5k28118139
add a comment |
up vote
1
down vote
I think in most applications short has no domain meaning, so it makes more sense to use Integer.
answered Oct 8 '09 at 19:03
C. Ross
17.8k34124217
add a comment |
up vote
1
down vote
short and others are often used for storing image data. Note that it is the number of bits which is really important, not the arithmetic properties (which just cause promotion to int or better).
short is also used as array indexes in JavaCard (1.0 and 2.0, IIRC, but not 3.0, which also has an HTTP stack and web services).
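A sketch of what "the number of bits is what matters" looks like in practice; the RGB565 pixel layout here is just a common example, not taken from the answer:

```java
public class Pixels {
    public static void main(String[] args) {
        // A tiny "image" of 16-bit RGB565 pixels: what matters is the bit width,
        // not short's signed arithmetic.
        short[] pixels = new short[4];
        int r = 31, g = 63, b = 31;                     // max 5-, 6-, 5-bit components
        pixels[0] = (short) ((r << 11) | (g << 5) | b); // pack -> 0xFFFF (white)
        int red = (pixels[0] >> 11) & 0x1F;             // unpack red; mask kills sign extension
        System.out.printf("pixel=0x%04X red=%d%n", pixels[0] & 0xFFFF, red);
    }
}
```

A short[] here uses half the memory of an int[], which matters when an image is millions of pixels.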
answered Oct 8 '09 at 20:21
Tom Hawtin - tackline
125k28179266
add a comment |
up vote
1
down vote
byte happens all the time; buffers, specifically for networks, files, graphics, serialization, etc.
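The canonical shape of such a buffer loop (a sketch using an in-memory stream rather than a real socket or file):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class CopyBuffer {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("hello".getBytes());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];        // the ubiquitous byte[] buffer
        int n;
        while ((n = in.read(buffer)) != -1) {  // read returns bytes read, or -1 at EOF
            out.write(buffer, 0, n);
        }
        System.out.println(out.toString());
    }
}
```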
answered Aug 12 '10 at 19:46
Dean J
21k135390
add a comment |
up vote
0
down vote
Most of the time, there's no really good technical reason for a developer (Java, C#, BASIC, etc.) to choose among int, short or byte - when the capacity is enough, of course. If the value will be under 2 billion, then int it will be.
Are you sure we'll have people older than 255? Well, you never know!
Aren't 32,767 possible countries enough? Don't think too small!
In your example, you can be perfectly happy with your byte var containing 100, if you are absolutely sure that it will NEVER overflow. Why do people use int the most? Because.... because.
This is one of those things that most of us just do because we saw it done that way most of the time, and never asked why.
Of course, I have nothing against "all things int". I just prefer to use the right type for each kind of value, no stress involved.
answered Apr 30 '14 at 23:12
Charles Roberto Canato
17829
add a comment |