Agentic Development with JProfiler
Coding agents need data to do their work. JProfiler is their eyes and ears for runtime performance problems.
Read what the agents themselves have to say about the utility of the JProfiler MCP server after using it.
Claude
AI coding assistant by Anthropic
Claude, after fixing a performance issue in commafeed with this commit, responded:
Q: What impact did the JProfiler MCP probes have on the performance improvement we achieved in this session?
A: The JProfiler MCP probes were essential — the improvement wouldn't have been identified without them.
Specifically, the JDBC probe revealed that findExisting (the guidHash check) was being called 580 times for 200 unique entries. That count was the key signal — it showed feeds were being re-checked on every refresh cycle even when all entries already existed. Without the call counts, this pattern would have been invisible: the total time (141ms) alone isn't alarming, and a code review wouldn't obviously flag it as a problem.
The persistence probe reinforced this by showing the same pattern in readable JPQL, making it easy to map back to FeedEntryDAO.findExisting and FeedRefreshUpdater.addEntry.
The follow-up profiling run confirmed the fix: guidHash calls dropped from 580 to 400, with the remaining 400 being genuinely necessary (first-time inserts). The bulk findExistingGuids query doesn't even appear in the top results — it's fast and only runs once per feed refresh instead of once per entry.
A final real-world profiling run showed an even larger improvement: the guidHash check dropped 47% in both count and time. The count now exactly matches the insert count, confirming that checks only fire for genuinely new entries.
Without profiling, the natural fix attempt might have been Hibernate batch inserts (a config change), which would have had no effect on this problem since the bottleneck was query count, not individual query speed.
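The pattern Claude describes, replacing a per-entry existence query with a single bulk lookup per feed refresh, can be sketched in plain Java. The names findExisting and findExistingGuids come from the testimonial; the in-memory "database" and query counter below are stand-ins added for illustration:

```java
import java.util.*;
import java.util.stream.*;

public class BulkLookupSketch {
    // Stand-in for the entries already stored for a feed.
    static final Set<String> storedGuidHashes = new HashSet<>(Set.of("a", "b", "c"));
    static int queryCount = 0;

    // Per-entry check: one query for every incoming entry (N queries per refresh).
    static boolean findExisting(String guidHash) {
        queryCount++;
        return storedGuidHashes.contains(guidHash);
    }

    // Bulk check: a single IN-style query per refresh (1 query).
    static Set<String> findExistingGuids(Collection<String> guidHashes) {
        queryCount++;
        Set<String> hits = new HashSet<>(guidHashes);
        hits.retainAll(storedGuidHashes);
        return hits;
    }

    public static void main(String[] args) {
        List<String> incoming = List.of("a", "b", "c", "d");

        queryCount = 0;
        List<String> newPerEntry = incoming.stream()
                .filter(g -> !findExisting(g))
                .collect(Collectors.toList());
        System.out.println("per-entry: " + queryCount + " queries, new=" + newPerEntry);

        queryCount = 0;
        Set<String> existing = findExistingGuids(incoming);
        List<String> newBulk = incoming.stream()
                .filter(g -> !existing.contains(g))
                .collect(Collectors.toList());
        System.out.println("bulk: " + queryCount + " query, new=" + newBulk);
    }
}
```

The total time of both variants can look similar in a small test, which is exactly why the probe's call counts, not timings, were the signal that mattered.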
Q: Please summarize this as a short testimonial statement.
A: JProfiler's JDBC and persistence probes revealed a pattern that timing alone would never surface: redundant per-entry database checks firing on every refresh cycle, even when nothing had changed. Without that probe data, I would have optimized the wrong thing entirely.
Codex
AI coding assistant by OpenAI
Codex, after fixing a performance issue in jclasslib with this commit, responded:
Q: How important was the JProfiler MCP, and how much better was the result compared to just inspecting the source code?
A: It was important for this task. Without the profiler, I would likely have spent time optimizing the wrong thing first.
- It gave a fast path from “run this exact test under profiling” to usable hotspot data. prepare_profiling returning a ready-to-pass JVM arg worked well.
- The workflow was simple and composable: prepare_profiling -> run Gradle test -> check_status -> get_performance_hotspots -> expand_performance_hotspot.
- The hotspot expansion was good enough to distinguish parser cost from harness cost.
- It directly surfaced the real win: eager reverse constant-pool indexing in ClassFile.setConstantPool, plus its downstream String.hashCode / HashMap.put cost. That was not the most obvious source-level guess.
- It prevented a likely misoptimization. Pure source inspection made ConstantUtf8Info.readUTF look like the main target. Profiling showed that was hot, but not the best first fix.
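The hand-off Codex mentions, passing the JVM argument returned by prepare_profiling to the Gradle test run, would typically be wired into the test task. A minimal build.gradle sketch, where the -agentpath value is only a placeholder for whatever prepare_profiling actually returns:

```groovy
// build.gradle -- forward the profiling agent argument to the test JVM.
// The -agentpath value below is a placeholder; the real path and
// options come from the prepare_profiling tool.
test {
    jvmArgs "-agentpath:/path/to/libjprofilerti.so=port=8849"
}
```

Running ./gradlew test then starts the test JVM with the profiling agent attached, after which the hotspot tools can query the recorded data.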
How much better than source inspection
- Qualitatively: substantially better.
- Source inspection alone would probably have led to:
- first trying to optimize modified UTF-8 decoding,
- maybe looking at JRT walking overhead,
- maybe trying generic micro-optimizations in structure reading.
- Profiling showed the highest-value fix was elsewhere: removing eager reverse-index construction on every class read.
- That produced a measured improvement from about 662 ms to 410 ms, roughly a 38% reduction in run time.
- I do not think I would have found that specific fix as the first move from source inspection alone.
Overall importance
- For this task: high.
- It changed the order of operations from “guess, patch, benchmark” to “measure, patch the biggest real cost, verify”.
- The final result was both faster and more defensible because it was tied to observed hotspots, not intuition.
Q: Please summarize this as a short testimonial statement.
A: The JProfiler MCP server gave me a direct path from a concrete Gradle test to actionable hotspot data inside the code I was changing. I could trace real method-level costs through the call tree, separate parser work from benchmark overhead, and choose an optimization based on evidence instead of intuition.
Junie
AI coding assistant by JetBrains
Junie with Gemini Pro, after finding out that the document cache in an Apache Solr instance was far too large:
Q: How important was the JProfiler MCP in your performance analysis? What could you have achieved without it?
A: Assessment of JProfiler MCP's Importance
To be completely direct: the JProfiler MCP was the only reason I could solve the problem directly and accurately.
Here is what I could have achieved without it:
• Getting the heap dump: I could likely still have generated a heap dump using standard JDK tools, e.g., by running jcmd or jmap inside the container via docker exec, and copied it to the host machine. However, this often fails if the Docker image only contains a JRE rather than a full JDK.
• Analyzing the dump: This is where I would have hit a wall. As an AI agent, I cannot open a GUI tool like Eclipse MAT or JProfiler myself. Without the MCP providing structured, queryable data like the biggest_objects or retained_classes views, my only options would have been:
• Attempting to use jhat if available, which is extremely slow, outdated, and difficult to navigate via terminal commands.
• Writing a custom Java script from scratch to parse the .hprof file using a third-party heap analysis library, which is highly error-prone and time-consuming.
• Guessing the problem based on generic Solr documentation and suggesting blind configuration changes without proof.
In short, without the MCP server, I would have been forced to guess the root cause rather than proving it, because I would have lacked a programmatic way to parse and interpret a 200 MB binary heap dump file.
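The standard-tools fallback described above can be sketched as shell commands; the container name, target PID, and file paths are assumptions, and the approach only works if the image ships a full JDK:

```shell
# List JVMs inside the container (assumes a container named "solr")
docker exec solr jcmd -l

# Trigger a heap dump with jcmd (PID 1 is an assumption)
docker exec solr jcmd 1 GC.heap_dump /tmp/solr.hprof

# Alternative with jmap:
# docker exec solr jmap -dump:live,format=b,file=/tmp/solr.hprof 1

# Copy the dump to the host for analysis
docker cp solr:/tmp/solr.hprof ./solr.hprof
```

Even after these steps succeed, the resulting .hprof file still has to be analyzed, which is the part the MCP server's structured views made tractable for an agent.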
Q: Please summarize this as a short testimonial statement.
A: Using the JProfiler MCP server was a game-changer for our Solr performance analysis. It allowed us to effortlessly profile a JVM running inside a Docker container without any complex setup. With instant heap dumps and targeted views, we quickly identified a 151 MB cache issue that would have otherwise taken hours to diagnose.