
MD5, SHA-1, SHA-256: Three Kinds of Hashing Everyone Confuses

Hash is one word for three completely different jobs — and "just use SHA-256 for passwords" is how databases get cracked. This guide separates general hashing, password hashing, and HMAC, with current 2026 recommendations for each and honest guidance on where MD5 still works.


A team once shipped a login system that stored passwords as sha256(password) with no salt. No bcrypt, no Argon2, no work factor — just a 64-character hex string in a password_hash column. The code review missed it. It ran for eighteen months. When the database eventually leaked (it always does), every password from that system was on a cracker's desk within two hours. A multi-GPU cracking rig computes hundreds of billions of SHA-256 hashes per second. Against an 8-character password, that's not brute force — that's a coffee break.

The mistake is common because "hash" is one word for three completely different jobs. General-purpose cryptographic hashes like SHA-256 exist to detect modification. Password hashes like bcrypt and Argon2 exist to be slow on purpose, so attackers can't grind through stolen databases. HMAC exists to prove a message came from someone with a shared secret. All three are technically hashing in some sense. None is a substitute for the others. Most posts you read online mix them together, which is why the "just use SHA-256 for passwords" mistake keeps happening.

This post separates the three lanes and tells you, for 2026, which algorithm belongs in which job. It covers MD5's actually-broken-but-still-useful status, the real story on SHA-1 (retired by NIST in 2022, not just "deprecated"), where SHA-256 sits today after recent collision research, why BLAKE3 matters, and the password-hashing landscape after OWASP's latest guidance. Then HMAC and the one pitfall that breaks most webhook validators — timing-safe comparison. If you just need a hash right now, paste into our hash generator. If you want to understand what you're actually doing, read on.

What a hash function actually is

Strip away the terminology and a hash function is a pure function that takes any input and returns a fixed-size output. For SHA-256, that output is always 256 bits (32 bytes, 64 hex characters), whether you hash a single byte or a 10 GB file. Four properties matter in practice:

Deterministic. Hash the same input twice, get the same output. This is why hashes work for file integrity — if the file changed, the hash changed.

Fixed size. Input length doesn't matter to the output length. Hashing a tweet and hashing the complete works of Shakespeare both produce 256 bits.

One-way. Given a hash, you can't recover the input. There's no unhash() function. The only way to find an input that produces a given hash is to guess inputs and hash them until you match.

Avalanche. One bit of input change produces a completely different hash. Tiny changes are not "close." A single typo turns "Hello" into something whose SHA-256 shares zero bytes with the original's SHA-256.

These four properties are shared by every modern cryptographic hash. Where they diverge is in size, speed, and the quality of that one-way and avalanche guarantee against adversaries specifically trying to break it. That's the story of MD5, SHA-1, and the SHA-2 family.
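Three of the four properties are directly observable from a few lines of Python's standard hashlib (one-wayness you have to take on faith):

```python
import hashlib

# Deterministic, fixed-size output: 256 bits = 64 hex characters
a = hashlib.sha256(b"Hello").hexdigest()
assert a == hashlib.sha256(b"Hello").hexdigest()
assert len(a) == 64

# Avalanche: a one-character change produces an unrelated digest
b = hashlib.sha256(b"hello").hexdigest()
same_positions = sum(x == y for x, y in zip(a, b))
print(a)
print(b)
print(f"hex positions that happen to match: {same_positions} of 64")
```

Roughly one in sixteen hex positions will match by pure chance, not because the inputs are similar — there is no notion of "close" between the two digests.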

Lane 1: general-purpose hashing

Start here because this is what people usually mean by "hashing." Detecting file corruption. Computing a content identifier. Digesting a message for a digital signature. Indexing by content. None of these involve passwords. None involve secrets.

MD5

Output: 128 bits (32 hex characters). Designed in 1991 by Ron Rivest. Once ubiquitous; now cryptographically broken.

MD5("Hello") = 8b1a9953c4611296a827abf8c47804d7

"Broken" means specific things. You can construct two distinct inputs with the same MD5 hash (practical since 2004, weaponized in 2012 by the Flame malware to forge Windows Update certificates). That's a collision attack. Preimage attacks (given a hash, find an input) are harder but the security margin has collapsed to the point where MD5 should never be in anything an attacker might target.

But "broken for security" and "useless" aren't the same thing. MD5 is still in daily production use for:

  • Cache keys — hashing a URL to build a filename, where nobody's trying to collide with your cache
  • Content-addressable storage for non-adversarial data (though Git famously uses SHA-1 and is slowly migrating to SHA-256)
  • Non-cryptographic deduplication — detecting accidental duplicates in a dataset
  • Quick corruption detection where you know the source is trustworthy

If nobody's actively trying to attack your hash, MD5 is fast and fine. The moment your threat model includes someone trying to forge something, MD5 goes in the bin.
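As a concrete example of the cache-key case, in Python (the path scheme here is made up for illustration):

```python
import hashlib

def cache_path(url: str) -> str:
    # Non-adversarial use: nobody benefits from colliding with a cache key,
    # so fast-and-cryptographically-broken MD5 is fine here
    return "/var/cache/app/" + hashlib.md5(url.encode("utf-8")).hexdigest() + ".bin"

print(cache_path("https://example.com/page?id=42"))
```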

SHA-1

Output: 160 bits (40 hex characters). Designed by the NSA in 1995 as a fix for earlier SHA-0. Used to be everywhere — TLS certificates, Git commits, code signing.

SHA-1 was broken in 2017 with the SHAttered attack, which produced two different PDFs with the same SHA-1 hash. Google demonstrated this on their infrastructure. The attack cost something like $110,000 in compute at the time; it's cheaper now. NIST formally retired SHA-1 for government use in December 2022. Browsers stopped trusting SHA-1 TLS certificates years before that.

Like MD5, SHA-1 persists in legacy contexts (Git's internal hash, some older protocols) but for new code you should skip it entirely. There's no situation in 2026 where SHA-1 is the best choice — SHA-256 is faster on modern hardware and actually secure.

SHA-256

Output: 256 bits (64 hex characters). Part of the SHA-2 family, designed by the NSA in 2001. The current default for general-purpose cryptographic hashing.

SHA-256("Hello") = 185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969

SHA-256 is used by Bitcoin, TLS certificates, code signing, JWTs (when HS256 is the algorithm), Git's ongoing migration from SHA-1, and just about every new cryptographic protocol. As of 2026 there are no practical collision attacks against the full 64-round SHA-256. Recent research in 2024 demonstrated the first practical collision for a reduced 31-round variant (the full algorithm has 64 rounds), but that's a cryptanalytic milestone, not a break of the real thing. Full SHA-256 remains secure.

Unless you have a specific reason to pick something else, SHA-256 is your default.

SHA-512 and the rest of SHA-2

Output: 512 bits. Same design as SHA-256 with 64-bit internal words instead of 32-bit, which ironically makes it faster on 64-bit CPUs for most inputs. SHA-384 and SHA-224 are truncated variants of SHA-512 and SHA-256 respectively — useful when a protocol requires a specific output length.

Use SHA-512 when you want the extra security margin (twice as many bits means twice as much brute-force effort), or when you're on a 64-bit CPU and want marginally better performance. For most applications SHA-256 is fine and matches the defaults everyone else uses.
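All four SHA-2 output lengths are visible from Python's hashlib:

```python
import hashlib

msg = b"the quick brown fox"
for name in ("sha224", "sha256", "sha384", "sha512"):
    bits = len(hashlib.new(name, msg).hexdigest()) * 4  # one hex char = 4 bits
    print(f"{name}: {bits} bits")
```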

SHA-3

Output: configurable (224, 256, 384, 512 bits depending on the variant). Selected in 2012 by NIST as a structurally different alternative to SHA-2 after the SHA-1 break scared people into wanting diversity. SHA-3 uses the Keccak sponge construction — completely different math from SHA-2's Merkle–Damgård.

SHA-3 is secure, it's slower than SHA-256 in software (faster in hardware), and it has real use cases when you want two structurally different hash algorithms for defense in depth (hash something twice with different families, and a break in one doesn't compromise you). Most applications don't need this and stick with SHA-256.

BLAKE2 and BLAKE3

BLAKE2 (output configurable up to 512 bits) and BLAKE3 (default 256 bits) are modern designs that prioritize speed without sacrificing security. BLAKE3 in particular is fast — on a current laptop it hashes a large file at multiple gigabytes per second, measurably faster than SHA-256, and its tree-based construction can parallelize across CPU cores for very large inputs.

BLAKE3 is worth using when you're hashing a lot of data and the performance matters (content-defined chunking, large-file integrity, high-throughput logging). For a typical REST API that hashes tiny payloads now and then, SHA-256 is fine and has broader library support. The Argon2id password hash we'll cover later is built on BLAKE2.
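BLAKE2 ships in Python's standard library (BLAKE3 needs the third-party blake3 package). A quick BLAKE2b sketch:

```python
import hashlib

# BLAKE2b with a configurable digest size; 32 bytes matches SHA-256's output length
h = hashlib.blake2b(b"some large payload", digest_size=32).hexdigest()
print(h)

# BLAKE2 also has a built-in keyed mode, usable where you might otherwise reach for HMAC
mac = hashlib.blake2b(b"the message", key=b"shared-secret", digest_size=32).hexdigest()
print(mac)
```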

Quick decision for general hashing

Job | Use | Avoid
File integrity, digital signatures, JWTs, any security-sensitive hash | SHA-256 | MD5, SHA-1
High-throughput content hashing, big files | BLAKE3 | MD5, SHA-1
Cache keys, non-adversarial deduplication | MD5 (still fine) or xxHash | -
Legacy compatibility only | whatever the legacy system uses | -
Specific protocol requires an odd length | SHA-384, SHA-224, etc. | -

Paste any text or file into our hash generator to see the same input through MD5, SHA-1, SHA-256, SHA-512, and a couple of BLAKE variants side by side. That's the quickest way to build intuition for how output differs per algorithm and how all of them exhibit the avalanche effect.

So is MD5 actually broken?

The answer depends on what you mean by "broken" and what your threat model is.

Against an attacker trying to forge something: yes. Completely. Don't use it. The Flame malware created a forged Microsoft certificate by exploiting an MD5 collision. Stop.

For non-adversarial use where nobody is trying to game your hashes: no, it's fine. A web cache using MD5 of the URL as a filename is in no danger. A deduplication pipeline processing photos from your own users isn't at risk. Git uses SHA-1 for content addressing and is gradually migrating to SHA-256 — but the migration isn't because anyone attacked Git repos at scale; it's because "the hash function we rely on is broken" is a correctness risk even if nobody's actively exploiting it, and cryptographic hygiene dictates upgrading before you have to.

The absolutist "never use MD5" advice is well-intentioned but wrong in its framing. The real advice is simpler: if someone might benefit from forging a hash collision, use SHA-256 or better. If not, MD5 is still a fast, compact, widely-supported hash that does one job well. Many posts will tell you otherwise; I'd rather be accurate than alarmist.

That said, there's almost no cost to defaulting to SHA-256. Library support is universal. Performance is close enough to not matter for most use cases. If you're writing new code and can't decide, pick SHA-256 and move on.

Recent hash security news (2022–2025)

Three things happened recently that are worth knowing about.

NIST officially retired SHA-1 in December 2022. This wasn't deprecation. This was retirement — federal systems cannot use SHA-1 for any cryptographic function after 2030, and the recommendation is to migrate now. If your codebase still has SHA-1 anywhere near a security boundary, treat it as a 2030 deadline.

Practical SHA-256 collision for 31-step reduced variant (2024). Cryptographers at EUROCRYPT 2024 and Asiacrypt 2024 demonstrated collisions on SHA-256 reduced to 31 of its 64 rounds. The full 64-round algorithm remains secure, but the security margin is the thing that shrinks over decades. This doesn't mean SHA-256 is breaking. It means researchers are steadily climbing the round count, and when the climb reaches the full spec (historically a multi-decade journey), NIST will have a successor ready — which is partly why SHA-3 exists.

CVE-2024-54143 (OpenWrt). A real-world vulnerability in the OpenWrt Attended Sysupgrade Service where SHA-256 hashes were truncated to 12 hex characters. That leaves 48 bits of hash, and a birthday collision against 48 bits costs only around 2^24 evaluations, which is trivially feasible. Attackers could poison the firmware build cache and deliver compromised images. The lesson: don't truncate cryptographic hashes unless you know what you're doing. If you don't need the full 256 bits, use SHA-224 or SHA-512/256 (both are properly-designed truncated variants), not sha256(...)[:12].
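If you need a shorter digest, the difference between ad-hoc slicing and a designed variant looks like this in Python (the input string is illustrative):

```python
import hashlib

data = b"firmware-build-request"

# Bad: 12 hex chars = 48 bits; a birthday collision costs only ~2^24 hashes
truncated = hashlib.sha256(data).hexdigest()[:12]

# Better: SHA-224 is a designed member of the family with 112-bit collision resistance
shorter = hashlib.sha224(data).hexdigest()

print(truncated, len(truncated))
print(shorter, len(shorter))
```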

Lane 2: password hashing is a different animal

This is where the SHA-256 story ends and a separate story begins. If you take nothing else from this post, take this: never hash passwords with SHA-256.

General-purpose hashes are designed to be fast. That's a feature when you're checksumming a file (the faster the better) and a catastrophic bug when you're storing a password. If someone steals your user database, every SHA-256 hash in it is a starting pistol for a brute-force race. A multi-GPU cracking rig manages hundreds of billions of SHA-256 operations per second. On an 8-character alphanumeric password, that's minutes. On an even halfway-decent wordlist-plus-mutations attack, it's seconds for common passwords.

Password hashing algorithms exist because the defense is to make each hash slow on purpose. Not slow enough for a user to notice (100ms per login is invisible) but slow enough that an attacker with a stolen database faces years of compute instead of minutes.

Three modern choices, in order of OWASP's 2025 recommendation:

Argon2id (preferred)

Winner of the Password Hashing Competition in 2015, standardized in RFC 9106, recommended by OWASP as the first choice for new systems. Memory-hard (requires significant RAM to compute), which defeats GPU and ASIC attacks because GPUs have relatively little fast RAM per core.

Three parameters: memory cost (how much RAM), time cost (how many iterations), and parallelism. OWASP's 2025 recommended minimums: 19 MiB of memory, 2 iterations, parallelism of 1. Treat those as a floor and tune upward to target roughly 200–500ms per hash on your production hardware.

// Node.js with argon2 package
import argon2 from "argon2";

const hash = await argon2.hash(password, {
  type: argon2.argon2id,
  memoryCost: 19456,   // 19 MiB
  timeCost: 2,
  parallelism: 1,
});

const valid = await argon2.verify(hash, candidatePassword);

bcrypt (battle-tested, still fine)

Designed in 1999 by Niels Provos and David Mazières. Uses the Blowfish cipher's expensive key schedule as the work function. Not memory-hard but has held up against two decades of hardware advances through its configurable cost parameter.

Default cost factor of 10 is too low in 2026. Use 12 for most production systems, 14 for high-value systems where slower login is acceptable.

# Python with bcrypt
import bcrypt

password = "correct horse battery staple".encode()
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verify
valid = bcrypt.checkpw(password, hashed)

bcrypt has a quirk: it caps input at 72 bytes. Longer passwords get silently truncated. This is fine for ordinary passwords but becomes a problem if you're hashing a pre-hashed credential (don't do that) or a long passphrase. For passphrases, Argon2id is the better choice.

scrypt (memory-hard, less popular)

Designed by Colin Percival in 2009. Memory-hard like Argon2, widely supported, slightly older. Still perfectly fine but Argon2 is the newer standard choice. Use scrypt if your stack has better scrypt support than Argon2.

PBKDF2 (compliance only)

Old. Not memory-hard. Still in the standards because FIPS compliance requires it in some contexts. If you have a choice, pick Argon2id or bcrypt. If you're stuck with PBKDF2 for regulatory reasons, use at least 600,000 iterations of PBKDF2-HMAC-SHA256 (OWASP's 2025 recommendation).
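PBKDF2 at least needs no third-party package; Python's hashlib ships it. A sketch at the recommended iteration count:

```python
import hashlib
import os

salt = os.urandom(16)
dk = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 600_000)

# dk is a 32-byte derived key. Store the salt and iteration count alongside it
# so you can re-derive and compare at login, and raise the count later.
print(len(dk), salt.hex())
```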

What to do with existing SHA-256 password hashes

You migrate. On next login, after verifying the old SHA-256 hash, immediately re-hash the plaintext password with Argon2id and store the new hash. Mark the user's row as migrated. After some number of months (or once login rates suggest most active users have rotated), force remaining un-migrated users to reset their passwords.

If you don't want to wait for each user's next login, you can wrap instead: run every stored SHA-256 digest through Argon2id right away and store argon2id(sha256(password)). At login, compute SHA-256 of the submitted password first, then verify the result with Argon2id. The stolen-database exposure is closed immediately, at the cost of carrying the extra layer (and its bookkeeping) for the life of those rows. More complex, but occasionally necessary.
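A migrate-on-login sketch in Python, with the Argon2 calls passed in as callables so the flow is visible without pinning a library. The helper names and the "bare 64-char hex means legacy" detection rule are illustrative, not a library API:

```python
import hashlib
import hmac
from typing import Callable

def is_legacy(stored: str) -> bool:
    # Legacy rows hold a bare 64-char SHA-256 hex digest;
    # Argon2 encoded strings start with "$argon2id$"
    return len(stored) == 64 and not stored.startswith("$")

def verify_and_migrate(
    stored: str,
    candidate: str,
    argon2_hash: Callable[[str], str],
    argon2_verify: Callable[[str, str], bool],
    save_hash: Callable[[str], None],
) -> bool:
    if is_legacy(stored):
        computed = hashlib.sha256(candidate.encode()).hexdigest()
        if not hmac.compare_digest(computed, stored):
            return False
        save_hash(argon2_hash(candidate))  # upgrade transparently on success
        return True
    return argon2_verify(stored, candidate)
```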

For checking whether passwords themselves are reasonable before hashing, our password strength checker uses the same zxcvbn scoring Dropbox pioneered. Our password generator produces high-entropy passwords and passphrases when you need one quickly. For hands-on bcrypt experiments, the bcrypt generator does cost-tuned hashing in the browser.

Salt, pepper, and the practical rules

Even with a slow algorithm, you need salt. And you might want pepper.

Salt is random data added to each password before hashing so two users with the same password get different hashes. Without salt, an attacker can precompute a table of hashes for common passwords (a rainbow table) and match them against your database in bulk. Salt destroys this by making each hash unique. Modern algorithms (Argon2, bcrypt, scrypt) handle salt for you — the salt is part of the output string, not something you manage separately.

Pepper is an application-wide secret added to every password before hashing, stored outside the database (in a secrets manager, environment variable, HSM). If an attacker steals the database but not the pepper, the hashes are worthless because they'd need to brute-force against the correct pepper value for every guess. Pepper is defense in depth. Not every system needs it; high-value systems probably should.

A typical salted+peppered flow:

# Pseudo-code
pepper = load_from_secrets_manager()   # server-side secret
salt = argon2.generate_salt()           # per-password random

combined = password + pepper
hash = argon2.hash(combined, salt=salt)
# store: hash (includes salt), user_id
# pepper stays in secrets manager

On verification, do the same: combine the submitted password with the pepper, then call argon2.verify() against the stored hash.

The big pitfall: if you rotate the pepper, every existing hash becomes unverifiable. Plan for multiple versioned peppers so you can migrate, or commit to a pepper that doesn't rotate for the application's lifetime.
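One way to survive rotation is to version the pepper and record, per hash, which version was used. A sketch in Python (pepper values and storage details are illustrative):

```python
import hashlib
import hmac

# Loaded from a secrets manager in real life; versioned so old hashes stay verifiable
PEPPERS = {1: b"retired-pepper-value", 2: b"current-pepper-value"}
CURRENT = 2

def pepper_password(password: str, version: int = CURRENT) -> bytes:
    # HMAC the password with the pepper rather than naive concatenation,
    # so password and pepper bytes can't smear into each other
    return hmac.new(PEPPERS[version], password.encode(), hashlib.sha256).digest()

# Store alongside the Argon2 hash: the pepper version used at creation time.
# On verify, look up the row's version, pepper with that key, then argon2.verify().
# New hashes always use CURRENT; old versions are only kept for verification.
```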

Lane 3: HMAC for authentication

The third lane. HMAC is technically built on a hash function but solves a different problem: proving a message came from someone holding a shared secret, without revealing the secret.

If you've ever verified a Stripe webhook, validated a GitHub webhook signature, or used AWS request signing, you've used HMAC. The signing party computes HMAC(secret, message) and sends the result as a signature. The verifying party recomputes the same HMAC with the same secret and checks they match. If they do, the message is authentic and hasn't been modified.

Specified in RFC 2104. The clever design prevents a class of attacks that would break the naive approach of hash(secret + message).

Why HMAC instead of hash(secret + message)?

With many hash algorithms (including MD5 and SHA-1), a length-extension attack lets an attacker who knows hash(secret + message) compute hash(secret + message + extra) for arbitrary extra, without knowing the secret. This breaks a naive "signed" message scheme.

HMAC is carefully designed to prevent this:

HMAC(key, message) = H((key XOR opad) || H((key XOR ipad) || message))

Where ipad and opad are specific constants, || is concatenation, and H is the underlying hash function. The double hashing with different pads kills the length-extension attack.

In practice, you never write this by hand. You use the crypto library's HMAC function.

Webhook signature verification, done right

Here's what Stripe's webhook verification looks like in production-ish code:

import { createHmac, timingSafeEqual } from "crypto";

function verifyStripeWebhook(payload, signatureHeader, secret) {
  // signatureHeader is something like "t=1492774577,v1=5257a869..."
  const parts = Object.fromEntries(
    signatureHeader.split(",").map((p) => p.split("="))
  );
  const timestamp = parts.t;
  const receivedSignature = parts.v1;

  // Prevent replay attacks: reject old timestamps
  const ageSeconds = Math.floor(Date.now() / 1000) - Number(timestamp);
  if (ageSeconds > 300) {
    throw new Error("webhook too old");
  }

  // Compute expected signature
  const signedPayload = `${timestamp}.${payload}`;
  const expectedSignature = createHmac("sha256", secret)
    .update(signedPayload)
    .digest("hex");

  // Constant-time comparison — critical
  const expectedBuf = Buffer.from(expectedSignature, "hex");
  const receivedBuf = Buffer.from(receivedSignature, "hex");
  if (expectedBuf.length !== receivedBuf.length) return false;
  return timingSafeEqual(expectedBuf, receivedBuf);
}

Three things to notice:

1. The timestamp check. Without it, an attacker who captures a valid webhook can replay it months later. Stripe, GitHub, and most webhook-sending systems include a timestamp specifically so receivers can enforce a replay window.

2. The signature is over timestamp.payload, not just payload. This is so the timestamp itself can't be tampered with. If you just signed the payload, an attacker could change the timestamp and the signature would still verify.

3. timingSafeEqual, not ===. This is the pitfall that breaks most webhook validators. A normal string comparison returns false as soon as it hits a mismatched character — so the time it takes to return false leaks which character was wrong. An attacker making many requests can measure tiny timing differences and gradually learn the expected signature. crypto.timingSafeEqual runs in constant time regardless of where the first mismatch is. Always use it for signature comparison.
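The same three checks translate directly to Python, where hmac.compare_digest is the constant-time comparison. This sketch assumes the timestamp and signature have already been parsed out of a header like the t=/v1= one above:

```python
import hashlib
import hmac
import time

def verify_webhook(payload: bytes, timestamp: str, received_sig: str,
                   secret: bytes, max_age_seconds: int = 300) -> bool:
    # Replay window: reject stale timestamps
    if time.time() - int(timestamp) > max_age_seconds:
        return False
    # Sign timestamp.payload so the timestamp itself can't be tampered with
    signed = timestamp.encode() + b"." + payload
    expected = hmac.new(secret, signed, hashlib.sha256).hexdigest()
    # Constant-time comparison, never ==
    return hmac.compare_digest(expected, received_sig)
```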

If you're building webhook endpoints yourself, our HMAC generator is handy for sanity-checking signatures during development — paste your secret, your payload, and the algorithm, and verify that the HMAC your code produces matches.

HMAC in JWTs (briefly)

JWTs signed with HS256 are using HMAC-SHA256. The "signature" in a JWT is literally HMAC-SHA256(secret, base64url(header) + "." + base64url(payload)). The server that verifies the JWT does the same computation and checks the result matches the signature segment.

The important implication is that HS256 is symmetric — anyone with the verification key can also sign. That's fine for single-server apps. For distributed systems where different services might verify tokens, use RS256 or ES256 (public-key signing) instead. More on that in our JWT decoder guide.
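The HS256 computation is small enough to write out by hand, which makes the symmetric-key point concrete. A sketch only; use a real JWT library in production:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    # JWTs use unpadded base64url
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"},
                               separators=(",", ":")).encode())
    claims = b64url(json.dumps(payload, separators=(",", ":")).encode())
    signing_input = header + b"." + claims
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Anyone holding `secret` can produce this: verifying and signing are the same power
    return (signing_input + b"." + sig).decode()

token = sign_hs256({"sub": "user-123"}, b"shared-secret")
print(token)
```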

Hash vs encryption — the confusion to unlearn

This gets confused constantly, so here's the one-table answer:

Aspect | Hashing | Encryption
Direction | one-way | two-way
Purpose | prove integrity | protect confidentiality
Reversible? | no | yes (with the key)
Needs a key? | no (except HMAC) | yes
Output length | fixed | roughly matches input
Use case | passwords, file integrity, digital signatures, content IDs | sending secrets, storing sensitive data that must be decrypted later

Hash when you need to detect modification or verify something matches. Encrypt when you need to get the data back. Never use hashing to "hide" data you need to retrieve — you can't retrieve it. Never use encryption for passwords — you don't need the original back, and two-way functions are the wrong tool for one-way work.

File integrity in practice

The most common non-password use of hashing: verifying that a downloaded file is what the publisher intended. Every reputable software release publishes SHA-256 checksums next to the download links.

ubuntu-24.04.1-desktop-amd64.iso
SHA256: 3d6d0facce3c5dba55a50a42b7dfae6f3cdc9f51d1de79d637067a9e4e18664a

After downloading, recompute and compare:

# macOS, Linux
shasum -a 256 ubuntu-24.04.1-desktop-amd64.iso

# Windows PowerShell
Get-FileHash ubuntu-24.04.1-desktop-amd64.iso -Algorithm SHA256

If the output matches the published value, the file is intact. If it doesn't, something's wrong — a corrupted download, a tampered mirror, a hostile proxy, or a man-in-the-middle attack. A mismatch is rare but always worth investigating; don't "just try again" until it matches without asking why.

For serious distributions (OS ISOs, security-critical software), the checksums themselves are often signed with GPG. You verify the signature on the checksums file first, then verify the downloaded file's hash matches one in the signed list. That chains integrity from the publisher's key all the way to your disk.

Generating hashes in code

Every modern language ships with hash primitives. You should never write your own hash implementation.

Node.js

import { createHash, createHmac, timingSafeEqual } from "crypto";

// Basic hash
const sha256 = createHash("sha256")
  .update("the quick brown fox")
  .digest("hex");

// HMAC
const mac = createHmac("sha256", "shared-secret")
  .update("the message")
  .digest("hex");

// Hashing a file
import { createReadStream } from "fs";

function hashFile(path) {
  return new Promise((resolve, reject) => {
    const hash = createHash("sha256");
    createReadStream(path)
      .on("data", (chunk) => hash.update(chunk))
      .on("end", () => resolve(hash.digest("hex")))
      .on("error", reject);
  });
}

Python

import hashlib
import hmac
import secrets

# Basic hash
digest = hashlib.sha256(b"the quick brown fox").hexdigest()

# HMAC
mac = hmac.new(b"shared-secret", b"the message", hashlib.sha256).hexdigest()

# Timing-safe comparison
if hmac.compare_digest(expected_mac, received_mac):
    # signatures match
    pass

# Hashing a file
def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Password: Argon2id
from argon2 import PasswordHasher
ph = PasswordHasher(memory_cost=19456, time_cost=2, parallelism=1)
hashed = ph.hash("correct horse battery staple")
ph.verify(hashed, "correct horse battery staple")   # raises on mismatch

Go

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "crypto/subtle"
    "encoding/hex"
    "fmt"
    "io"
    "os"
)

// Hashing a file
func hashFile(path string) (string, error) {
    f, err := os.Open(path)
    if err != nil {
        return "", err
    }
    defer f.Close()
    h := sha256.New()
    if _, err := io.Copy(h, f); err != nil {
        return "", err
    }
    return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
    // Basic hash
    h := sha256.Sum256([]byte("the quick brown fox"))
    fmt.Println(hex.EncodeToString(h[:]))

    // HMAC
    mac := hmac.New(sha256.New, []byte("shared-secret"))
    mac.Write([]byte("the message"))
    fmt.Println(hex.EncodeToString(mac.Sum(nil)))

    // Timing-safe comparison
    expected := mac.Sum(nil)
    received := mac.Sum(nil)
    if subtle.ConstantTimeCompare(expected, received) == 1 {
        fmt.Println("signatures match")
    }
}

If you just need to hash a bit of text or a small file right now, skip the code. Our hash generator does it in the browser and runs client-side so sensitive data never leaves your machine.

FAQ

Can you reverse a SHA-256 hash?

No. Hash functions are mathematically one-way. Given a hash, the only way to find an input that produces it is to guess inputs and hash them until one matches. For a 256-bit hash, the search space is 2^256 — larger than the number of atoms in the observable universe. What attackers actually do is guess likely inputs (passwords from wordlists, common patterns) and hash them, which is why password hashing needs to be slow on purpose.

Is MD5 completely useless?

No. MD5 is catastrophically broken against adversaries (you can construct collisions deliberately) but still fine for non-adversarial use like cache keys, content addressing within trusted systems, or detecting accidental file corruption. The one rule: if someone might benefit from forging a hash match, don't use MD5.

Should I use MD5 or SHA-256 for file integrity?

SHA-256. Modern operating systems ship tools for both, and SHA-256 is the default when publishing software checksums today. MD5 is fine for detecting random corruption but any malicious intermediary can produce a matching MD5 for a modified file — which is the whole point of verifying a download.

Why is bcrypt used for passwords instead of SHA-256?

SHA-256 is fast — a multi-GPU cracking rig can try hundreds of billions of SHA-256 hashes per second. That speed makes brute-forcing stolen password databases trivial. bcrypt (and Argon2 and scrypt) are intentionally slow, configurable to take hundreds of milliseconds per hash. That's invisible to legitimate users logging in but catastrophic for attackers trying billions of guesses.

What is a rainbow table?

A precomputed table of hashes for common passwords. Without salt, an attacker can compute SHA-256 of "password", "123456", "qwerty", and a billion others, then match those hashes against every user in a stolen database. Salt (unique per-password random data) makes this approach useless because the same password hashes to different values for different users.

How long is a SHA-256 hash?

256 bits = 32 bytes = 64 hexadecimal characters, or 43 characters in unpadded base64 (44 with padding). The length is always the same regardless of input size.

Can two different files have the same SHA-256?

Mathematically yes — there are infinitely many inputs mapping to 2^256 possible outputs, so collisions must exist. Practically, no. Finding a collision against the full 64-round SHA-256 has never been done and isn't within reach of current technology. If you observe two different files with identical SHA-256 hashes, the most likely explanation is that one file is a bit-for-bit copy of the other.

Is SHA-256 quantum-resistant?

Mostly. Grover's algorithm would halve the effective security of any symmetric primitive, so SHA-256 gives 128 bits of effective security against a sufficiently large quantum computer instead of 256. That's still well above the safety threshold. For ultra-long-term security against quantum adversaries, SHA-384 or SHA-512 are more comfortable margins. Post-quantum concerns mostly affect public-key cryptography (RSA, ECDSA) rather than symmetric hashes.

What's the difference between hash and HMAC?

A hash takes input and produces output. Anyone can compute it. HMAC takes input and a secret key and produces output. Only someone with the key can produce the correct HMAC. Hashes verify integrity; HMACs verify integrity and authenticity (the message came from someone with the key).

What's salt for, really?

Making sure two users with the same password don't have the same stored hash. Without salt, one cracked password cracks every account sharing it and lets attackers precompute rainbow tables. With unique per-password salt, each hash must be attacked individually.

Do I need to remember the salt?

Modern password hashing libraries (Argon2, bcrypt, scrypt) store the salt inside the hash output, encoded as part of the string. You don't manage it separately — you store the output, and the library extracts the salt when verifying.

What is pepper?

An application-wide secret added to every password before hashing, stored outside the database (in a secrets manager or HSM). If an attacker steals the database but not the pepper, the hashes are unusable. It's defense in depth; not every system needs it, but high-value systems benefit.

Why is it called "peppering"?

Because it pairs with salting. The analogy works if you squint at it.

Why shouldn't I compare hashes with ===?

Timing attacks. Normal string comparison returns false as soon as it finds a mismatched character, so the time it takes leaks information about where the mismatch is. An attacker making many requests can gradually learn the correct value. Use crypto.timingSafeEqual (Node), hmac.compare_digest (Python), or subtle.ConstantTimeCompare (Go) instead — they run in constant time regardless of where the mismatch is.

What's BLAKE3 and should I use it?

BLAKE3 is a modern hash function designed for speed — significantly faster than SHA-256 on large inputs because it parallelizes across CPU cores. Use it when you're hashing gigabytes and performance matters. For small payloads or API-level hashing, the speed difference doesn't matter and SHA-256 has broader library support.

Is there a successor to SHA-256 already?

SHA-3 was standardized in 2015 as a structurally different backup in case SHA-2 gets broken. It hasn't displaced SHA-256 in practice because SHA-256 remains secure. If future cryptanalysis reduces SHA-256's security, SHA-3 or a post-quantum hash family is ready to take over.

What's the fastest secure hash?

BLAKE3 in most benchmarks, especially on multi-core systems hashing large inputs. SHA-256 is fast enough for almost every practical use case. If your bottleneck is hashing throughput and you've proven it with a profiler, switch to BLAKE3. Otherwise stick with SHA-256.

One more thing

Three lanes, three recommendations for 2026:

  • General-purpose hashing: SHA-256 as the default, BLAKE3 if you're hashing a lot of data, MD5 only for non-security use where attackers can't benefit from collisions.
  • Password hashing: Argon2id is the OWASP-recommended choice, bcrypt at cost factor 12+ is still fine, never raw SHA-256.
  • Authentication (HMAC): HMAC-SHA256 with timestamp + timing-safe comparison for webhook validation and API signing.

Our hash generator computes hashes across MD5, SHA-1, SHA-256, SHA-512, and BLAKE families in the browser. Our HMAC generator does HMAC with the same. For passwords specifically, combine the password strength checker with real Argon2id or bcrypt code in your application — don't try to hash user credentials client-side.

If you're in the cryptographic neighborhood and working with tokens, our JWT decoder guide covers HS256's HMAC relationship. If you're choosing primary keys and wondering about v5 UUIDs (which use SHA-1 for deterministic hashing), the UUID v4 vs v7 guide has that story.

Hashes are three tools sharing a name. Use the right one for the job and your system quietly does the right thing. Mix them up and you ship the next "we hashed passwords with SHA-256" incident.
