
Unix Timestamps Explained: What They Are and How to Use Them

Learn what Unix timestamps are, why developers use epoch time, how to convert between timestamps and dates, and common pitfalls with timezones and the Year 2038 problem.

Try our free Unix Timestamp Converter

Convert timestamps to dates and back. Auto-detects seconds vs milliseconds with timezone support.

Open Tool

What is a Unix Timestamp?

A Unix timestamp (also called epoch time, POSIX time, or simply Unix time) is the number of seconds that have elapsed since January 1, 1970 at 00:00:00 UTC. This specific moment is known as the Unix epoch, and it serves as the universal reference point for timekeeping in computing.

For example, the timestamp 1700000000 represents November 14, 2023 at 22:13:20 UTC. The timestamp 0 represents the epoch itself — midnight on January 1, 1970. Negative timestamps represent dates before the epoch: -86400 is December 31, 1969.

The choice of 1970 as the epoch was pragmatic rather than symbolic. When Ken Thompson and Dennis Ritchie were building Unix at Bell Labs, they needed a starting point. Early versions of Unix actually counted time in sixtieths of a second, which overflowed a 32-bit counter in under three years; the system soon switched to counting whole seconds, and the epoch settled at the start of 1970. A signed 32-bit count of seconds spans about 136 years, giving coverage from 1901 to 2038. Today, most systems have moved to 64-bit timestamps, extending the range to billions of years in either direction. Try converting timestamps yourself with our Unix Timestamp Converter.

Why Developers Use Epoch Time

Despite being a single number with no inherent meaning to humans, Unix timestamps have become the standard way to represent time in software for several compelling reasons:

1. Timezone-independent: A Unix timestamp always represents the same instant in time, regardless of where you are. 1700000000 is the same moment whether you are in New York, Tokyo, or London. The local time representation changes, but the underlying value does not. This eliminates an entire class of bugs.
2. Naturally sortable: Because timestamps are plain integers, sorting events chronologically is as simple as sorting numbers. No date parsing, no locale-aware comparisons — just ORDER BY created_at ASC.
3. Compact storage: A Unix timestamp fits in a 32-bit or 64-bit integer — 4 or 8 bytes. An ISO 8601 date string like 2023-11-14T22:13:20.000Z takes 24 bytes. At scale, this adds up in database storage, network payloads, and log files.
4. Arithmetic is trivial: Adding 3600 seconds gives you exactly one hour later. Subtracting two timestamps gives the exact duration in seconds. No need to worry about months having different lengths, daylight saving time shifts, or leap years.
5. Universal language support: Every programming language, database, and operating system natively supports Unix timestamps. APIs almost universally include them in responses, making interoperability straightforward.
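The sortability and arithmetic properties are easy to see in a few lines of Python (the event names here are made up for illustration):

```python
from datetime import datetime, timezone

# Three hypothetical events stored with Unix timestamps (seconds)
events = [
    ("deploy", 1700003600),
    ("build",  1700000000),
    ("test",   1700001800),
]

# Naturally sortable: chronological order is just numeric order
events.sort(key=lambda e: e[1])

# Trivial arithmetic: duration from first to last event
duration = events[-1][1] - events[0][1]   # 3600 seconds, exactly one hour

# Timezone-independent: the same instant everywhere
dt = datetime.fromtimestamp(1700000000, tz=timezone.utc)
print(dt.isoformat())  # 2023-11-14T22:13:20+00:00
```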

Seconds vs Milliseconds

One of the most common sources of bugs when working with timestamps is confusing seconds and milliseconds. Different languages and platforms use different precisions:

| Precision | Digits | Used By | Example |
| --- | --- | --- | --- |
| Seconds | 10 | Python, PHP, Ruby, Unix/Linux, MySQL, PostgreSQL | 1700000000 |
| Milliseconds | 13 | JavaScript, Java, C#, Dart, some REST APIs | 1700000000000 |
| Microseconds | 16 | Go, Rust (chrono), PostgreSQL (internal) | 1700000000000000 |
| Nanoseconds | 19 | Go (time.UnixNano), Rust, high-precision logging | 1700000000000000000 |

Quick rule of thumb: if the number has 10 digits, it's seconds. If it has 13 digits, it's milliseconds. Our Timestamp Converter auto-detects the precision, so you never have to guess.

A common bug: passing a seconds-based timestamp to a function that expects milliseconds. If you pass 1700000000 to JavaScript's new Date(), it interprets it as 1,700,000 seconds from epoch — which is January 20, 1970. You need new Date(1700000000 * 1000) to get the correct result.
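That digit-count rule of thumb is easy to codify. A minimal Python sketch (the `to_seconds` helper and its thresholds are illustrative, not taken from any particular library):

```python
def to_seconds(ts: int) -> float:
    """Normalize a Unix timestamp to seconds, guessing precision by digit count."""
    digits = len(str(abs(ts)))
    if digits >= 19:       # nanoseconds
        return ts / 1_000_000_000
    if digits >= 16:       # microseconds
        return ts / 1_000_000
    if digits >= 13:       # milliseconds
        return ts / 1_000
    return float(ts)       # assume seconds

print(to_seconds(1700000000))     # 1700000000.0 (already seconds)
print(to_seconds(1700000000000))  # 1700000000.0 (milliseconds detected)
```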

Working with Timestamps in Code

JavaScript / TypeScript

// Get current timestamp
const nowMs = Date.now();              // 1700000000000 (milliseconds)
const nowSec = Math.floor(Date.now() / 1000);  // 1700000000 (seconds)

// Timestamp to Date
const date = new Date(1700000000 * 1000);  // multiply by 1000!
console.log(date.toISOString());           // "2023-11-14T22:13:20.000Z"

// Date to timestamp
const ts = Math.floor(new Date("2023-11-14").getTime() / 1000);

// Duration between two timestamps
const diff = ts2 - ts1;  // difference in seconds
const hours = Math.floor(diff / 3600);
const minutes = Math.floor((diff % 3600) / 60);

Python

import time
from datetime import datetime, timezone

# Get current timestamp (seconds)
now = time.time()                    # 1700000000.123456
now_int = int(time.time())           # 1700000000

# Timestamp to datetime
dt = datetime.fromtimestamp(1700000000, tz=timezone.utc)
print(dt.isoformat())               # "2023-11-14T22:13:20+00:00"

# Datetime to timestamp
ts = int(datetime(2023, 11, 14, tzinfo=timezone.utc).timestamp())

# IMPORTANT: Always use timezone-aware datetimes
# datetime.fromtimestamp(ts) without tz= uses LOCAL time — a common bug

SQL (MySQL / PostgreSQL)

-- MySQL: get current timestamp
SELECT UNIX_TIMESTAMP();                           -- 1700000000

-- MySQL: timestamp to date
SELECT FROM_UNIXTIME(1700000000);                  -- "2023-11-14 22:13:20"

-- PostgreSQL: timestamp to date
SELECT TO_TIMESTAMP(1700000000);                   -- "2023-11-14 22:13:20+00"

-- PostgreSQL: current time to Unix timestamp
SELECT EXTRACT(EPOCH FROM NOW())::integer;

-- MySQL: filter by time range (last 24 hours), assuming created_at stores Unix seconds
SELECT * FROM events
WHERE created_at >= UNIX_TIMESTAMP() - 86400;

When debugging timestamp issues, it helps to have a conversion tool open. Pair our Timestamp Converter with the UUID Generator when building systems that need both unique IDs and time tracking — UUID v7 embeds a Unix timestamp directly into the ID.

Timezone Pitfalls

Unix timestamps are inherently UTC, but the moment you convert them to human-readable dates, timezones creep in and introduce subtle bugs. Here are the most common pitfalls:

1. Server vs Client Timezone Mismatch

Your server generates a timestamp in UTC. Your frontend receives it and calls new Date(ts * 1000).toLocaleString(), which renders it in the user's local timezone. If the user is in UTC+9 (Tokyo), a timestamp from 11 PM UTC displays as 8 AM the next day. This is correct behavior, but if your UI shows "Created on November 14" when the user sees "November 15," it causes confusion. Always be explicit about which timezone you're displaying.
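Being explicit means attaching the timezone at render time rather than relying on the host's default. A Python sketch using a fixed UTC+9 offset to stand in for Tokyo (a real app would use a named zone from the tz database):

```python
from datetime import datetime, timedelta, timezone

ts = 1700000000  # 22:13:20 UTC on November 14, 2023

# Render the same instant explicitly in two timezones
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
jst = datetime.fromtimestamp(ts, tz=timezone(timedelta(hours=9), "JST"))  # Tokyo, no DST

print(utc.isoformat())  # 2023-11-14T22:13:20+00:00
print(jst.isoformat())  # 2023-11-15T07:13:20+09:00  (already "tomorrow" in Tokyo)
```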

2. Daylight Saving Time (DST) Gaps

When DST begins, clocks skip forward — 2:00 AM becomes 3:00 AM instantly. When DST ends, clocks fall back and the same hour occurs twice. If you're scheduling events using local time strings instead of timestamps, you can end up with events that either fire twice or never fire at all. Always store and transmit times as Unix timestamps, and convert to local time only for display. If you're building scheduled tasks, our Cron Expression Generator can help you visualize the next run times in your target timezone.
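The repeated hour is concrete in Python's zoneinfo: when US DST ended on November 5, 2023, the wall-clock time 1:30 AM in New York occurred twice, and PEP 495's fold attribute picks which occurrence you mean. This sketch assumes the system tz database is available:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")

# Clocks fell back from 2:00 to 1:00, so 1:30 AM happened twice.
first  = datetime(2023, 11, 5, 1, 30, tzinfo=ny, fold=0)  # 1:30 EDT (UTC-4)
second = datetime(2023, 11, 5, 1, 30, tzinfo=ny, fold=1)  # 1:30 EST (UTC-5)

print(int(first.timestamp()))   # 1699162200
print(int(second.timestamp()))  # 1699165800
# Same local time string, two Unix timestamps one hour apart
```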

3. Database Timezone Configuration

MySQL's TIMESTAMP column type stores values in UTC and converts to the session timezone on retrieval. DATETIME stores the literal value without timezone conversion. If your MySQL server's timezone changes (e.g., after a cloud migration), all TIMESTAMP values display differently, while DATETIME values stay the same. The safest approach is to store Unix timestamps as integers and handle all conversion in application code.

4. Leap Seconds

Unix time technically does not account for leap seconds — it pretends every day is exactly 86,400 seconds. Since 1972, 27 leap seconds have been inserted into UTC (which now trails International Atomic Time, TAI, by 37 seconds), and Unix time silently absorbs each one, so differences between timestamps can undercount the true elapsed SI seconds by up to 27. For most applications this is irrelevant, but if you're building systems that need sub-second precision across years (financial trading, scientific instrumentation), you need to be aware of this limitation.

The Year 2038 Problem

The Year 2038 problem (also called Y2K38 or the Epochalypse) is the timestamp equivalent of Y2K. It stems from a simple limitation: a signed 32-bit integer can store a maximum value of 2,147,483,647. When interpreted as a Unix timestamp, that value corresponds to Tuesday, January 19, 2038 at 03:14:07 UTC.

One second later, the counter overflows. In systems that use signed 32-bit timestamps, the value wraps around to -2,147,483,648, which the system interprets as December 13, 1901. This causes date comparisons to break, expired certificates to appear valid, and scheduled tasks to malfunction.

// Maximum signed 32-bit integer
2,147,483,647 → January 19, 2038 03:14:07 UTC

// Overflow (one second later)
-2,147,483,648 → December 13, 1901 20:45:52 UTC

// 64-bit maximum (signed)
9,223,372,036,854,775,807 → year 292,277,026,596
// We have some time before this is a problem.
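The wraparound above can be reproduced by forcing the count into a signed 32-bit integer, here with Python's ctypes (note that fromtimestamp with negative values works on Linux and macOS; Windows may raise OSError):

```python
import ctypes
from datetime import datetime, timezone

INT32_MAX = 2_147_483_647

# Simulate a signed 32-bit time_t ticking one second past its maximum
wrapped = ctypes.c_int32(INT32_MAX + 1).value
print(wrapped)  # -2147483648

print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))    # 1901-12-13 20:45:52+00:00
```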

Are You Affected?

- Modern 64-bit systems: Not affected. Linux (since kernel 5.6 for 32-bit), macOS, Windows, and all modern browsers use 64-bit timestamps.
- JavaScript: Not affected. JavaScript's Date uses 64-bit floating-point internally, supporting timestamps well beyond 2038.
- Python 3: Not affected. Python integers have arbitrary precision.
- Java: Not affected. System.currentTimeMillis() returns a 64-bit long.
- MySQL: Partially affected. The TIMESTAMP column type in MySQL 5.x is limited to 2038. MySQL 8.0+ has improved handling, but consider using BIGINT for timestamp storage.
- Embedded systems and IoT: Potentially affected. Many embedded devices still use 32-bit processors and older C libraries with 32-bit time_t.

The fix is straightforward: use 64-bit integers for all timestamp storage and processing. If you're building a new system today, this is almost certainly the default. If you maintain legacy systems, audit any code that uses int or int32 for timestamps and upgrade to int64 or long.

Frequently Asked Questions

What is epoch time?
Epoch time (Unix time) is the number of seconds that have elapsed since January 1, 1970 at 00:00:00 UTC. This reference point is called the Unix epoch. It provides a single, timezone-independent number to represent any moment in time.
How do I get the current Unix timestamp in JavaScript?
Use Date.now() to get the current timestamp in milliseconds. For seconds (the standard Unix format), use Math.floor(Date.now() / 1000). Both return a number — Date.now() is slightly more efficient than new Date().getTime() because it doesn't create a Date object.
Why do some timestamps have 10 digits and others have 13?
A 10-digit timestamp counts seconds since the epoch (used by Python, PHP, Unix shell commands, and most databases). A 13-digit timestamp counts milliseconds (used by JavaScript, Java, and many web APIs). Multiply seconds by 1000 to convert to milliseconds, or divide milliseconds by 1000 for seconds.
Will my code break in 2038?
Only if your system stores timestamps as signed 32-bit integers. The maximum value (2,147,483,647) corresponds to January 19, 2038. Modern 64-bit operating systems, JavaScript, Python 3, Java, and current database versions all use 64-bit timestamps and will not be affected. Check legacy C code, embedded systems, and old MySQL TIMESTAMP columns.

Convert Timestamps Now

Paste any timestamp or date and instantly convert between Unix time and human-readable formats. Auto-detects seconds vs milliseconds with full timezone support.

Open Unix Timestamp Converter

Related Tools