Reading about PostgreSQL data types, specifically the "Date/Time" types, I noticed something weird (for me, at least).
The "Time" data type allocates the same storage size (8 bytes) as the "Timestamp" type. Although "Time" is responsible of storing only the time while "Timestamp" is storing both Date and Time being a super-set of time.
In addition, both types have exactly the same resolution (1 microsecond / 14 digits), which leaves me wondering why they both allocate 8 bytes, unlike the "Date" type, which allocates only 4.
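For what it's worth, the sizes can be checked directly with pg_column_size; a quick sketch (the example literals are arbitrary):

    -- pg_column_size reports the storage size of a value in bytes
    SELECT pg_column_size('12:34:56.123456'::time)          AS time_bytes,      -- 8
           pg_column_size('2024-01-01 12:34:56'::timestamp) AS timestamp_bytes, -- 8
           pg_column_size('2024-01-01'::date)               AS date_bytes;      -- 4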
Is this something internal that affects performance, or what is the reason?
There are 86,400,000,000 microseconds in a day. That's more than 2^32, so the result can't be stored in 32 bits. The next best option is 64 bits, i.e. 8 bytes.
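A quick sanity check of that arithmetic in SQL (plain integer math, nothing type-specific):

    -- microseconds per day vs. the 32-bit limit
    SELECT 86400::bigint * 1000000                   AS usec_per_day,  -- 86400000000
           2::numeric ^ 32                           AS two_pow_32,    -- 4294967296
           86400::bigint * 1000000 > 2::numeric ^ 32 AS needs_64_bits; -- true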
Compare that with the date type, which covers 4713 BC to 5874897 AD, i.e. 5879611 years. That's around 2147483820 days, which is less than 2^32 and so can be stored in a 32-bit integer. (The actual span even comes in just under 2^31; PostgreSQL stores a date as a signed 32-bit count of days from its 2000-01-01 epoch, so every value in the range fits comfortably.)
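The same back-of-envelope check for dates (a sketch; 365.2425 is the average Gregorian year length, and the second query assumes the documented range endpoints are accepted as date literals):

    -- approximate day span of the date range vs. the 32-bit limits
    SELECT (5879611 * 365.2425)::bigint AS approx_days, -- 2147483821
           2::numeric ^ 31              AS two_pow_31,  -- 2147483648
           2::numeric ^ 32              AS two_pow_32;  -- 4294967296

    -- the actual span between the documented min and max dates
    SELECT DATE '5874897-12-31' - DATE '4713-01-01 BC' AS exact_days; -- just under 2^31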