How would you compare this to the Qlty CLI (https://github.com/qltysh/qlty)?
Do you plan to support CLI-based workflows for tools like Claude Code and linting?
I think at first glance we try to establish a strong bond between what’s running in the IDE with our CLI and what tool configs you have running in the cloud on Codacy. We spend a lot of time on coding standards, gates, and making all the tools we integrate run well with good standards for large teams (our tool coverage seems pretty comparable to qlty’s, though we also have some tools of our own right now, e.g. for secret scanning). We also have an MCP server, and we found that tying code analysis to coding agents is not trivial, so I think that’s also something different. Beyond that: DAST, pen testing, etc. We’ve become a full-on security company and that’s been our focus.
We do, and we’re looking into it. It really started for us when we launched an MCP server.
How do you avoid "context pollution" when the LLM inevitably cycles on an issue? I've specifically disabled Cursor's "fix linter errors" feature because it constantly clogs up context.
On context pollution, unfortunately we rely a lot on the model actually being used. One thing we do is give clear instructions to only analyze the code being produced and not act on ALL the issues/problems identified. Still, we recommend starting with a small, well-chosen selection of tools and going from there: an SCA (mandatory, really), a secret scanner, and a well-curated list of security issues. If we feed too many issues to the models, they... well... don’t work.
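To make the scoping concrete, here’s a minimal sketch of the idea, not our actual implementation (scopeFindings, MAX_FINDINGS, and the sample data are made up for illustration): filter the findings down to the lines the agent just touched, and cap how many ever reach its context.

    interface Finding { file: string; line: number; message: string; }
    interface ChangedRange { file: string; start: number; end: number; }

    const MAX_FINDINGS = 10; // assumed budget; keeps the model from drowning

    function scopeFindings(findings: Finding[], changed: ChangedRange[]): Finding[] {
      return findings
        .filter(f => changed.some(r =>
          r.file === f.file && f.line >= r.start && f.line <= r.end))
        .slice(0, MAX_FINDINGS);
    }

    // Only the issue inside the freshly edited range goes back to the agent;
    // the pre-existing nit on line 300 is left for a cloud/CI pass.
    console.log(scopeFindings(
      [
        { file: "app.ts", line: 12, message: "hardcoded secret" },
        { file: "app.ts", line: 300, message: "pre-existing style nit" },
      ],
      [{ file: "app.ts", start: 1, end: 40 }],
    ));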
Can you explain how/when the "guardrails" are run in Cursor? I mean: how does the extension hook in so that the code in the diff view gets changed?
Does this also work with agents like Claude Code and Amp? I guess since there is an MCP it can already work even though it's not explicitly mentioned in the docs?
What are your thoughts on running something like guardrails during dev-time vs CI time?
The guardrails are run every time the agent generates code. We give the coding agents instructions to run the guardrails on the code that changed. It doesn't YET work with Claude Code and Amp, but because it leverages an MCP server we can add that easily. It's in the plans.
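Roughly, the hook is an MCP tool the agent is instructed to call on every diff it produces. Here's a minimal sketch of that shape using the public @modelcontextprotocol/sdk for TypeScript (the tool name analyze_changed_code and the runAnalyzers stub are illustrative, not our actual server):

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const server = new McpServer({ name: "guardrails-sketch", version: "0.1.0" });

    // The agent's rules tell it to call this tool with each diff it produces,
    // so only freshly generated code gets analyzed.
    server.tool(
      "analyze_changed_code",
      { diff: z.string().describe("unified diff of the code just generated") },
      async ({ diff }) => {
        const issues = await runAnalyzers(diff); // stand-in for real analyzers
        return { content: [{ type: "text", text: JSON.stringify(issues) }] };
      },
    );

    // Stub: a real server would fan the diff out to SCA, secret scanning, etc.
    async function runAnalyzers(diff: string): Promise<string[]> {
      return [];
    }

    await server.connect(new StdioServerTransport());

Any MCP-capable agent can call the same tool, which is why extending support to Claude Code and Amp is mostly plumbing.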
I think dev time is critical, because AI is producing large swaths of code as we speak. We also make sure that regardless of what happens at dev time, we can always run our cloud checks at CI time. Thanks for your questions!
Thanks for testing. Please do share your feedback when you test further!
Also, a big fat raspberry for their use of tinyurl to obfuscate https://marketplace.visualstudio.com/items?itemName=codacy-a... -- just cruel
The analysis can run locally in a sandboxed environment (provided you download the dependencies, tools, etc.).
Only if you then want to use our cloud scans, or let your coding agent interact with data from Codacy, would you need the MCP server connecting to our API.