Free Online Log Explorer
About this tool
This log explorer parses, searches, filters, and aggregates application logs directly in your browser - no uploads, no accounts, no log ingestion pipeline required. It handles three common log formats: NDJSON/JSON Lines (one JSON object per line, used by Node.js pino, Bunyan, Logstash, and most cloud services), key=value structured text (logfmt and some Apache-style logs), and plain syslog-style text. Recognized fields - time, level, source, and message - are normalized into a unified structure, then displayed in a sortable results table with level-based color coding. Query syntax supports free-text search, field:value field matching, AND/OR logic, and optional time range filters.
Real example
Input (NDJSON log from a Node.js service):
{"time":"2024-03-15T14:22:01Z","level":"info","service":"api-gateway","msg":"request received","path":"/users/42","ms":12}
{"time":"2024-03-15T14:22:03Z","level":"error","service":"auth","msg":"token validation failed","userId":"u-991"}
{"time":"2024-03-15T14:22:05Z","level":"warn","service":"db","msg":"slow query detected","ms":2400,"table":"sessions"}
{"time":"2024-03-15T14:22:06Z","level":"error","service":"api-gateway","msg":"upstream timeout","path":"/payments"}
{"time":"2024-03-15T14:22:08Z","level":"info","service":"api-gateway","msg":"request received","path":"/health","ms":1}
Query: level:error - filters to 2 events (token validation failed, upstream timeout).
Query: service:api-gateway AND level:error - narrows to 1 event (upstream timeout).
The aggregates panel shows: error=2, warn=1, info=2. The per-minute bucket chart shows all 5 events landing in the same minute, making it easy to identify the incident window before exporting the filtered set as CSV for an incident report.
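The filtering and aggregation in the example above can be pictured in a few lines of plain JavaScript. This is an illustrative sketch, not the tool's actual implementation:

```javascript
// Illustrative sketch: filter NDJSON events by level and count events per level.
const ndjson = [
  '{"time":"2024-03-15T14:22:01Z","level":"info","service":"api-gateway","msg":"request received"}',
  '{"time":"2024-03-15T14:22:03Z","level":"error","service":"auth","msg":"token validation failed"}',
  '{"time":"2024-03-15T14:22:05Z","level":"warn","service":"db","msg":"slow query detected"}',
  '{"time":"2024-03-15T14:22:06Z","level":"error","service":"api-gateway","msg":"upstream timeout"}',
  '{"time":"2024-03-15T14:22:08Z","level":"info","service":"api-gateway","msg":"request received"}',
];

const events = ndjson.map((line) => JSON.parse(line));

// Query: level:error
const errors = events.filter((e) => e.level === "error");

// Aggregates panel: events per level
const counts = {};
for (const e of events) counts[e.level] = (counts[e.level] || 0) + 1;

console.log(errors.length); // 2
console.log(counts);        // { info: 2, error: 2, warn: 1 }
```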
Common use cases
- Production incident triage: During an outage, download a log file from your server or cloud console and drop it here. Use level:error to surface all error events, then narrow by source: or a time range to identify the first error in the chain. Export the filtered results as JSON for the incident ticket.
- NDJSON log analysis from cloud services: AWS CloudWatch Logs Insights, Google Cloud Logging, and Datadog all support NDJSON export. Paste or drop an exported log file here to query it without re-ingesting it or writing a query in the cloud console.
- Finding slow queries or high-latency events: If your log format includes a numeric latency field (e.g., ms or duration), use the field query to filter for lines containing specific services or endpoints, then visually scan the message column for latency values or error context.
- Pre-deployment log review: Before deploying to production, run your application locally, capture its log output, and drop it here to verify that expected log events are present and no unexpected errors appear - without setting up a full logging stack.
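For the high-latency use case, the same kind of scan can also be done in a few lines outside the tool. A sketch assuming an NDJSON log with a numeric ms field (field names are illustrative):

```javascript
// Illustrative sketch: surface high-latency events from NDJSON with a numeric "ms" field.
const lines = [
  '{"level":"info","service":"api-gateway","msg":"request received","ms":12}',
  '{"level":"warn","service":"db","msg":"slow query detected","ms":2400}',
  '{"level":"info","service":"api-gateway","msg":"request received","ms":1}',
];

const slow = lines
  .map((l) => JSON.parse(l))
  .filter((e) => typeof e.ms === "number" && e.ms > 1000); // threshold is arbitrary

console.log(slow.map((e) => e.msg)); // [ "slow query detected" ]
```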
How it works
Parsing runs in a Web Worker to avoid blocking the UI for large files. Lines are processed one at a time. If a line begins with { and ends with }, it is parsed as JSON and fields are normalized: time/timestamp/ts/@timestamp -> time, level/severity/lvl -> level, source/service/app/logger -> source, msg/message/text/body/log -> message. Lines matching key=value patterns (e.g., level=info msg="request received") are parsed similarly. Unrecognized lines are kept as raw text and searchable as free text. Queries are evaluated as a simple AND/OR tree of terms, where field:value terms match against normalized fields and plain terms match anywhere in the raw line.
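The per-line parsing and field normalization described above can be sketched as follows. The alias lists come from the text; everything else (function names, error handling, the key=value regex) is an illustrative assumption, not the tool's actual code:

```javascript
// Illustrative sketch of per-line parsing with field normalization.
const ALIASES = {
  time: ["time", "timestamp", "ts", "@timestamp"],
  level: ["level", "severity", "lvl"],
  source: ["source", "service", "app", "logger"],
  message: ["msg", "message", "text", "body", "log"],
};

// Map whatever field names the log uses onto the unified structure.
function normalize(obj) {
  const out = {};
  for (const [field, names] of Object.entries(ALIASES)) {
    for (const name of names) {
      if (name in obj) { out[field] = obj[name]; break; }
    }
  }
  return out;
}

function parseLine(line) {
  const t = line.trim();
  // JSON line: starts with { and ends with }
  if (t.startsWith("{") && t.endsWith("}")) {
    try { return normalize(JSON.parse(t)); } catch { return { message: line }; }
  }
  // key=value line, e.g. level=info msg="request received"
  const kv = {};
  for (const m of t.matchAll(/(\w+)=("([^"]*)"|\S+)/g)) {
    kv[m[1]] = m[3] !== undefined ? m[3] : m[2];
  }
  // Unrecognized lines are kept as raw text.
  return Object.keys(kv).length ? normalize(kv) : { message: line };
}

console.log(parseLine('{"ts":"2024-03-15T14:22:03Z","severity":"error","service":"auth","msg":"token validation failed"}'));
// → { time: "2024-03-15T14:22:03Z", level: "error", source: "auth", message: "token validation failed" }
console.log(parseLine('level=info msg="request received"'));
// → { level: "info", message: "request received" }
```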
Common mistakes
- Multi-line log entries not parsed correctly: This tool processes one line at a time. If your logs contain stack traces or multi-line messages (Java exception stack traces, Python tracebacks), the continuation lines will appear as separate unstructured entries. The first line of the exception is typically parseable, but the stack trace lines will appear as raw text. Filter for the exception message on the first line to find the event.
- Time filters require parseable timestamps: Time range filters only apply to events where a timestamp was successfully parsed. If your log uses a non-standard timestamp format (e.g., seconds since epoch as a plain integer rather than an ISO string), the time field may not parse and the event will be excluded from time-filtered results while still appearing in text searches.
- AND is implicit between multiple terms: Entering error auth is treated as error AND auth - both terms must be present. To match either term, use an explicit error OR auth. This is the opposite of some full-text search engines (such as Elasticsearch's default behavior), where multiple terms produce OR results.
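For the epoch-timestamp pitfall above, one workaround is to pre-convert the time field before pasting the log. A sketch assuming a numeric time field in epoch seconds (this pre-processing step is a suggestion, not something the tool does):

```javascript
// Illustrative pre-processing: rewrite epoch-seconds "time" fields as ISO 8601 strings
// so the time range filter can parse them.
function isoify(line) {
  const e = JSON.parse(line);
  if (typeof e.time === "number") {
    e.time = new Date(e.time * 1000).toISOString(); // seconds → ms → ISO string
  }
  return JSON.stringify(e);
}

console.log(isoify('{"time":1710512521,"level":"error","msg":"upstream timeout"}'));
// → {"time":"2024-03-15T14:22:01.000Z","level":"error","msg":"upstream timeout"}
```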
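The implicit-AND behavior can be pictured as a tiny term evaluator. This sketch only handles flat AND/OR term lists and is illustrative, not the tool's query parser:

```javascript
// Illustrative sketch: plain terms AND together; an explicit OR switches the combiner.
function matches(query, line) {
  const lower = line.toLowerCase();
  if (query.includes(" OR ")) {
    return query.split(" OR ").some((t) => lower.includes(t.trim().toLowerCase()));
  }
  // Implicit AND between whitespace-separated terms (explicit AND behaves the same).
  return query
    .split(/\s+AND\s+|\s+/)
    .every((t) => lower.includes(t.toLowerCase()));
}

const line = '{"level":"error","service":"auth","msg":"token validation failed"}';
console.log(matches("error auth", line));        // true: both terms present
console.log(matches("error payments", line));    // false: "payments" missing
console.log(matches("error OR payments", line)); // true: either term suffices
```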
FAQ
Does this upload my logs anywhere?
No. Parsing, searching, filtering, and export all run in a browser Web Worker - a local JavaScript thread. Your log data never leaves your device. This makes the tool safe to use with production logs that contain PII, authentication tokens, or internal service names.
What NDJSON field names are recognized automatically?
The parser recognizes common field name variants: time (time, timestamp, ts, @timestamp), level (level, severity, lvl), source (source, service, app, logger, host, component), and message (msg, message, text, body, log). Fields that don't match these patterns are preserved in the raw JSON and remain searchable via free text.
How large a log file can I analyze?
The tool uses a streaming Web Worker and can handle files with hundreds of thousands of lines. Performance depends on browser memory - files up to ~50MB typically work without issues. For very large files, use the time range filter to constrain the loaded result set before running queries.
Can I export filtered results?
Yes. After running a query, use the Export JSON or Export CSV button to download the filtered event set. The CSV export includes time, level, source, and message columns and can be opened directly in Excel or Google Sheets for sharing with non-technical stakeholders during incident review.
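A CSV export over the normalized events amounts to quoting and joining the four columns. This sketch shows the general shape (illustrative, not the tool's export code), including doubling embedded quotes so Excel and Google Sheets parse messages correctly:

```javascript
// Illustrative sketch: serialize normalized events to CSV with quoted, escaped fields.
function toCsv(events) {
  const cols = ["time", "level", "source", "message"];
  const esc = (v) => `"${String(v ?? "").replace(/"/g, '""')}"`; // double embedded quotes
  const rows = events.map((e) => cols.map((c) => esc(e[c])).join(","));
  return [cols.join(","), ...rows].join("\n");
}

const csv = toCsv([
  { time: "2024-03-15T14:22:03Z", level: "error", source: "auth", message: "token validation failed" },
]);
console.log(csv);
// time,level,source,message
// "2024-03-15T14:22:03Z","error","auth","token validation failed"
```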
Related tools
- Regex Tester — build and test extraction patterns for your log line format
- JSON Formatter — pretty-print structured JSON log entries for easier reading
- Diff Checker — compare log snapshots taken before and after a deployment
- UNIX Timestamp Converter — decode epoch timestamps embedded in log events