Ensure high-quality code with more test coverage
I like the ease with which you can view the different warnings in code blocks. The explanations are also pretty easy to understand.
There are a fair number of false positives that flag code that is fine, but that seems inevitable and I have yet to find a tool that does better.
Like other code review tools, you can't take everything as gospel; instead, you use Code Climate to identify places that deserve a closer, more skeptical look. It is especially useful if you have a large codebase and multiple developers, as this really helps enforce good coding practices.
We have a huge codebase and need to enforce best practices. Code Climate is a good tool for pointing out sections of code that need a closer look.
Code Climate allows us to monitor and correct code quality issues before they hit production and become permanent. We also use it for test coverage, which is a great metric of our ability to work and ship fast.
There are sometimes false positives but they are fairly easy to work around and dismiss.
It's part of our CI pipeline that helps ensure code quality is up to par.
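A CI integration like the one this reviewer describes is typically wired up with Code Climate's test reporter binary. A sketch, assuming a GitLab-CI-style YAML pipeline; the job name, test command, and the `CC_TEST_REPORTER_ID` secret are placeholders, not details taken from the review:

```yaml
# Sketch of a CI job that reports test coverage to Code Climate.
# Assumes CC_TEST_REPORTER_ID is set as a CI secret for the repo.
test:
  script:
    # Fetch the official test reporter binary
    - curl -L https://codeclimate.com/downloads/test-reporter/test-reporter-latest-linux-amd64 > ./cc-test-reporter
    - chmod +x ./cc-test-reporter
    # Notify Code Climate that a build is starting
    - ./cc-test-reporter before-build
    # Run the test suite (placeholder command)
    - bundle exec rspec
    # Upload coverage; --exit-code lets the reporter record failed builds too
    - ./cc-test-reporter after-build --exit-code $?
```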
I really like how code climate quality finds the code issues that I can use for refactoring, it leads to much higher code quality.
I do not like how aggressive it sometimes is with how it rates code; I end up having to ignore the warnings a lot of the time.
It helps me determine code that needs refactoring.
I am able to connect all of my repositories in one place for static analysis and get a good picture, across all of my applications, of how well we are doing on quality.
There are options for configuring both in the UI and via the codebase, though the codebase is preferred. If you have a lot of repositories that you want to get up and running quickly, this is pretty easy to install.
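For the codebase-side configuration the reviewer prefers, Code Climate reads a `.codeclimate.yml` file committed at the repository root. A minimal sketch; the plugin choices, threshold, and exclude patterns below are illustrative, not a recommendation:

```yaml
# .codeclimate.yml -- sketch of repository-side configuration
version: "2"
plugins:
  rubocop:          # enable the RuboCop engine
    enabled: true
  duplication:      # enable the code duplication engine
    enabled: true
checks:
  method-lines:     # tune a built-in maintainability check
    config:
      threshold: 30
exclude_patterns:   # paths to skip during analysis
  - "spec/"
  - "vendor/"
```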
Some of their plugins don't work; for example, the Legal Compliance plugin doesn't work with newer code, which is very frustrating.
The permissions aren't quite as granular as I would like either. What I wanted was to require approval for overrides, except for a couple that I would consider more of a warning. Unfortunately, it is all or nothing.
Quality checks for code repositories, and security requirements via static analysis checks.
My favorite Code Climate feature is the test coverage reporting. Open source tools can easily provide total coverage metrics but the "diff-coverage" feature of Code Climate is really helpful to see what percentage of new code being added is not tested.
The GitHub browser extension is also very helpful in showing code smells and violations inline on the pull request.
The RuboCop check is inflexible in the sense that we are not allowed to use 3rd party gems. We're restricted to using "rubocop-rails" and "rubocop-rspec" which is a big limitation when we have shared rules and cops that we want to incorporate into all of the repositories in our organization.
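To make the limitation concrete: on Code Climate's RuboCop engine, a repository's `.rubocop.yml` can only `require` the extensions the engine bundles; pulling in an in-house gem of shared cops is not possible. A sketch, where `my_org_style` is a hypothetical internal gem, not a real package:

```yaml
# .rubocop.yml -- sketch of the restriction described above
require:
  - rubocop-rails   # bundled with Code Climate's RuboCop engine
  - rubocop-rspec   # bundled with Code Climate's RuboCop engine

# A shared in-house style gem cannot be installed by the engine,
# so organization-wide rules like these are unavailable there:
# inherit_gem:
#   my_org_style: .rubocop.yml   # hypothetical internal gem
```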
The static checks performed by Code Climate are helpful in discovering a certain class of bugs and security vulnerabilities before the code is merged and deployed to production; they also enforce code quality standards across the organization.
Adds tools that we cannot find elsewhere, like its code duplication engine. It also reports on PRs, including coverage. Trends are nice, especially in the beginning.
Since they shipped Code Climate Productivity, this product has received fewer updates. For example, the RuboCop engine is way behind, I tend to have problems with the browser extension, and there have been no new features in a long time.
As an example, I'd like to have coverage reports on the summary page. It's nice to have them on GitHub thanks to the extension, but it has limits:
- it stops after 25 files
- it does not show unmodified files whose coverage changes
- it requires a lot of resources (not saying you can do better)
I don't get why it has to be on GitHub and can't be on Code Climate.
- Homogeneity of coding style, even for those who skip git hooks
- Ensuring new code is covered by tests
- Code duplication detection
- Avoiding style-nitpick comments on pull requests, so reviewers focus on important matters such as architecture
Code Climate scans are fast and they have a nice UI. In theory, Code Climate's vision is really good.
In practice, the scans don't really work out as well as you would hope. Nine times out of ten, the "violations" that it finds are either wrong or flawed in some way.
We use it to try to maintain quality code.
It was amazing ~7 years ago. Even today, the browser extension to show code coverage on GitHub PRs is unmatched. The quality checks worked, and before they introduced "engines," CodeClimate even did a fantastic job of showing the /subjective/ quality of code.
It's been abandoned for years. In ~2020 I had an email exchange with support where I said "I'm fed up with things breaking all the time. Are you even investing in the product anymore?" They replied "we need some time to answer this," and then never replied again -- even after I followed up.
Abandoned here means that engines are often outdated or broken, and the value the product delivers hasn't improved in years.
Hard quality checks, like pass/fail linting. It no longer provides subjective quality measures: the letter-grades are simply "how many errors does this have."
I don't think CodeClimate Quality has any value.
Every time CodeClimate complains, it's something worthless or a weird glitch in their system. I hate CodeClimate.
The only valuable check CodeClimate offers is the diff test coverage. Everything else is worthless to me.
- Really simple to use; I can't remember the last time someone had a question about its usage and function
- Seamless integration with GitHub
On the trends section, it would be good to have some explanation of how the data there is gathered; for example, I'm not completely sure the technical debt ratio shown is realistic for our context, so it would help to understand what it means in practice.
Ensuring our code follows good standards and practices regarding maintainability and test coverage.