How it works
Unix time (also called epoch time) counts whole seconds since 00:00:00 UTC on 1 January 1970, ignoring leap seconds. A timestamp like 1,700,000,000 identifies a single second on that shared clock: 22:13:20 UTC on 14 November 2023.
When you enter a calendar date and time—year, month, day, hour, minute, and second—the calculator constructs a UTC date from those fields and then computes the number of seconds between that instant and the Unix epoch start. Internally, it is equivalent to using a `Date.UTC(…)` style call, then dividing the resulting milliseconds by 1,000 and truncating to an integer.
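A minimal sketch of that direction in plain JavaScript (the function name is illustrative, not the calculator's actual code). One detail the sketch has to handle: `Date.UTC` expects a zero-based month index, so a human-entered month must be shifted down by one.

```javascript
// Convert human-entered UTC date fields to a Unix timestamp in whole seconds.
// Date.UTC takes a zero-based month index (January = 0, December = 11),
// so the 1-12 month a user types is shifted down by one.
function toUnixSeconds(year, month, day, hour, minute, second) {
  const ms = Date.UTC(year, month - 1, day, hour, minute, second);
  return Math.floor(ms / 1000); // drop the millisecond remainder
}
```

For example, `toUnixSeconds(2023, 11, 14, 22, 13, 20)` yields `1700000000`, and `toUnixSeconds(1970, 1, 1, 0, 0, 0)` yields `0`.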
Going in the other direction, when you paste a Unix timestamp into the input field, the calculator multiplies it by 1,000 to get milliseconds, constructs a UTC date object at that instant, and then extracts the year, month, day, hour, minute, and second components.
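The reverse direction can be sketched the same way (again, the helper name and the shape of the returned object are illustrative):

```javascript
// Convert a Unix timestamp (whole seconds) back into UTC date fields.
function fromUnixSeconds(ts) {
  const d = new Date(ts * 1000); // Date wants milliseconds since the epoch
  return {
    year:   d.getUTCFullYear(),
    month:  d.getUTCMonth() + 1, // shift the zero-based index back to 1-12
    day:    d.getUTCDate(),
    hour:   d.getUTCHours(),
    minute: d.getUTCMinutes(),
    second: d.getUTCSeconds(),
  };
}
```

Using the `getUTC*` accessors (rather than `getFullYear`, `getMonth`, and so on) is what keeps the result independent of the browser's local timezone.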
Because Unix timestamps are defined in UTC, the conversions in this tool intentionally ignore your local timezone offset. That ensures that a given numeric timestamp always maps to the same UTC date/time here as it does in your servers and APIs.
The output includes both the individual numeric date parts (year, month, day, hour, minute, second) and a combined human‑readable UTC string, making it easy to compare against logs, database rows, or external documentation that refers to specific times.
If you are working with millisecond or microsecond precision, you can still use the calculator by manually trimming or scaling the value (for example, dividing a millisecond timestamp by 1,000 to convert it to whole seconds before pasting).
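The scaling step is a single integer division; a quick sketch (the sample values are arbitrary):

```javascript
// Scale higher-precision timestamps down to whole seconds before pasting.
const millis = 1700000000123;    // millisecond-precision timestamp
const micros = 1700000000123456; // microsecond-precision timestamp

const fromMillis = Math.floor(millis / 1e3); // divide by 1,000
const fromMicros = Math.floor(micros / 1e6); // divide by 1,000,000
// both reduce to the same whole-second timestamp: 1700000000
```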
Formula
Unix timestamp (seconds) = floor(Date.UTC(year, month − 1, day, hour, minute, second) ÷ 1000)
(`Date.UTC` takes a zero‑based month index, so January is 0 and December is 11.)
Converted UTC date = Date constructed from (Unix timestamp × 1000 milliseconds) and then decomposed into year, month, day, hour, minute, and second.
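For whole-second inputs the two formulas are exact inverses, which a quick round-trip check in plain JavaScript confirms (month index 10 is November, since `Date.UTC` months are zero-based):

```javascript
// Round trip: date fields -> timestamp -> date fields should be lossless.
const ts = Math.floor(Date.UTC(2023, 10, 14, 22, 13, 20) / 1000); // 1700000000
const d = new Date(ts * 1000);
// d.getUTCFullYear() is 2023, d.getUTCMonth() + 1 is 11,
// d.getUTCDate() is 14, and the time components are 22:13:20.
```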