How I Doubled jclasslib's Performance with JProfiler MCP
Recently, I improved the performance of jclasslib, my open-source JVM bytecode viewer, by using Codex and a profiling agent. Not because I had a particular pain point, but because I was trying out our MCP server that shipped with JProfiler 16.1. Read on for how the agent more than doubled class file reading speed, and why profiling no longer needs a reason to start.
The Hotspot Claude Would Never Have Found by Reading the Code
CommaFeed is my daily RSS reader. It is open source, and its creator runs a free public instance at commafeed.com. Every time I refresh my feeds, someone else's server does the work. So when I wondered where all that work actually goes, I pointed Claude Code and the JProfiler MCP at it.
As a result, a database optimization was merged into CommaFeed. The fix itself was not complicated, but Claude would never have come up with it from the source code alone. I asked afterward, and it confirmed as much: its natural first attempt would have fixed the wrong thing. Read on for how Claude Code, connected to the JProfiler MCP server, took me from curiosity to a merged PR.
JProfiler 16.1: AI Agents Can Now Profile Your Java Applications
Since JProfiler 1.0, profiling has been something a developer does with a GUI. You start a session, navigate views, and interpret data. In practice, this means that performance analysis only happens when a problem is serious enough to justify the context switch, and many issues ship to production without ever being profiled.
With JProfiler 16.1, we are changing how profiling works: AI coding agents can now profile your Java applications, analyze the results, and act on them. This is the biggest change in how developers use profiling since we started.
The new JProfiler MCP server, available as the @ej-technologies/jprofiler-mcp npm package, exposes JProfiler's profiling and heap analysis capabilities through the Model Context Protocol. It works with Claude Code, Cursor, Codex, Gemini CLI and any other MCP-compatible AI coding tool.
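As a sketch of what setup looks like, here is how the server could be registered with Claude Code. The server name "jprofiler" is an arbitrary label of our choosing, and the exact invocation may differ in your environment; consult the JProfiler documentation for the authoritative steps.

```shell
# Register the JProfiler MCP server with Claude Code (hypothetical example).
# Assumes Node.js is installed; npx downloads and runs the npm package on demand.
claude mcp add jprofiler -- npx -y @ej-technologies/jprofiler-mcp
```

Other MCP-compatible tools use their own registration mechanism, typically a JSON configuration entry pointing at the same npx command.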
Breaking the LLM Black Box: Custom Categorization in JProfiler
Modern agentic applications often perform a wide range of logical tasks through a single interface. While these operations have vastly different performance and cost profiles, they appear as an undifferentiated call tree in traditional profilers.
By default, JProfiler groups these requests by model name, which provides a high-level overview but lacks the granularity needed to distinguish between different functional workloads.
This screencast shows how to move beyond this predefined perspective with scripts that define your own categorization rules based on internal metadata, instruction structure, or application state.
Profiling AI: LangChain4j and Spring AI
Agentic applications introduce unique profiling challenges. Beyond standard CPU usage, performance and costs are determined by the complex structure of prompts, RAG retrievers, and tool calls that remain hidden behind framework abstractions.
To avoid vendor lock-in, most developers use frameworks like LangChain4j or Spring AI. JProfiler’s AI probe provides deep visibility into these frameworks.
This screencast walks through profiling a LangChain4j customer support agent, showing how the AI probe visualizes prompt compositions, isolates token-heavy outliers, and projects resource costs directly onto the recorded call tree.
JProfiler 16: Profiling Agentic Java Applications
Why AI Needs Profiling
Traditional profiling focuses on the JVM's internal execution, like method durations, memory allocation, and thread synchronization issues. One of JProfiler's key innovations over the years has been its probes: measurements of higher-level systems, like HTTP, JDBC, and RPC calls. With LLM frameworks like LangChain4j and Spring AI, a new performance challenge has emerged. LLM interactions introduce highly non-deterministic latency and substantial resource costs that standard CPU profiling cannot put into context. JProfiler is in a unique position to bridge this gap by treating AI interactions as a data source for a new probe.
Migrating to install4j 12
In most cases, migrating to install4j 12 just involves opening and saving your project with the install4j 12 IDE. Nevertheless, there are some considerations with respect to backward compatibility and some behavioral changes.
With Temurin 24.0.2, Adoptium JDKs can again be modularized by install4j
This year brought some dramatic moments, starting in April with the release of Temurin 24.0.0 from Adoptium, our default JDK bundle provider. In this blog post, we celebrate the happy conclusion of that incident.
The power of async tracking in JVM profiling
Async operations can speed up applications and improve responsiveness, but they also introduce complexity. Especially in the context of profiling, understanding what really happened and why can be surprisingly tricky. This post shows how JProfiler's async tracking feature helps fix hard performance problems in your application.
Website refresh: Visual updates, dark mode, and semantic search for docs
We have just rolled out significant changes to our website. They include many visual updates and important infrastructure improvements that speed up loading times in many locations across the globe.
In addition, there are three functional changes that we would like to highlight: