If you want to write maintainable and robust code, following the set of rules I lay out here will help you a lot. They might seem like common sense to more senior people, but remember that your junior colleagues might not even be aware of them. If your colleagues know them but don’t follow them, you should consider having them copy down this article 10 times on a whiteboard. It may help.
“What for? I write perfect code every time!” or “I’ll just test it manually…” are the most common comments I hear, but this is a short-sighted approach. Sure, writing tests takes time, but it’s time well spent - every time.
The immediate benefit is that you don’t have to test your code manually with each change (which is often far more time-consuming than writing tests in the first place). Another undeniable plus is that writing test cases forces you to think about what could go sideways in your code. You’ll be surprised at how often you find use cases and roadblocks that did not come up during the initial implementation. Third, you don’t have to wait for a full implementation to start writing tests. You can prepare cases as soon as you start implementing a feature - just write the description and add the full test later.
Tests also document your code’s behavior - through the test cases you choose and how you name them. For example, if you are testing the parsing of an API response and there is some initially unexpected response that forces you to branch your code to handle it properly, add a test case that covers that branch. And name it properly.
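A minimal sketch of the idea, using a hypothetical `parseUserName` function and a tiny stand-in for a test runner such as Jest: the test names themselves document the branch that was added for the unexpected response.

```typescript
// Hypothetical response shape: some API versions return `name: null`
// instead of omitting the field, so the parser branches to handle both.
type ApiUser = { id: number; name?: string | null };

function parseUserName(response: ApiUser): string {
  if (response.name === undefined || response.name === null) {
    return "<unknown>";
  }
  return response.name;
}

// Tiny stand-in for a real test runner's `it(name, fn)`.
function test(name: string, fn: () => void): void {
  fn();
  console.log(`ok: ${name}`);
}

test("returns the name when the API provides one", () => {
  if (parseUserName({ id: 1, name: "Ada" }) !== "Ada") throw new Error("fail");
});

// This test name records WHY the null branch exists - future readers
// learn the API quirk without digging through commit history.
test("falls back to a placeholder when the API sends name: null", () => {
  if (parseUserName({ id: 2, name: null }) !== "<unknown>") throw new Error("fail");
});
```

Anyone reading the test output now knows the API can send `name: null`, without ever opening the parser itself.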
If your code reviews are something like, “meh, looks good to me, approved,” you are almost certain to run into trouble down the road. This may seem obvious, but I’ve encountered a good number of developers who underestimated code reviews only to see it backfire.
During code review, you may gain new insights into the problem(s) you were trying to solve, or even learn some cool new feature you were not aware of. Whatever the case, it’s always better to have at least one other person go through your code - a different point of view is almost universally helpful. If you are the one doing the code review, please take your time and do it properly; don’t just check that the reviewed code does what it is intended to do. Try to find a slightly different solution, challenge some patterns, or just imagine what could go wrong. There is always something.
In my case, I try to be a bit skeptical during code review no matter who (e.g. junior developer, senior developer, manager) submitted the code.
Let’s do a scenario.
You’ve been through a few iterations of code review and want to close the task as soon as possible. The only thing you need to fix before merging is to rename a function. Trivial, right? Just rename it, merge it, deploy it, and call it a day.
Deploying code to production with a function renamed in only half of its call sites should make you rethink that. We are all human, prone to errors, and shouldn’t be so sure of code we’ve just written. Always run it locally first. The extra ten minutes you spend testing your code for the 2,139,025th time beats blind faith. In the end, you will waste much more time than you thought you had saved.
If you like inconsistent or duplicated data feel free to ignore this point. If you don’t, learn from my mistakes and use database transactions from the beginning of app development. Having to add transactions to an app that is already two years into its evolution is a pain in the ass. But hey, better late than never!
I can’t stress enough how crucial transactions are when you’re working with databases. Without them, you can encounter a situation where you have an error halfway through your code and some data may have already been written to the database. When you run flawed code a second time with the same data, there is a high probability that it will fail due to some sort of constraint violation.
At that point, it’s better to just delete the partially written data and run the code again. If the second run doesn’t fail, it probably means you didn’t set up your constraints properly and you’ll end up with loads of duplicated data. Or worse, the data will be corrupted in a way you won’t be able to fix.
In this case you have two options:
1.) Try to find all of the duplicated or corrupted data and delete it. This sounds fun for sure, but it’s much better than the second option.
2.) Ignore it and leave it all in the database. This option is the textbook definition of a “party” - especially when you try to debug your broken code again. To prevent this, start the transaction at the highest level and pass it down to all modules (or whatever you call them). This approach ensures the transaction will roll back on error, so you will have either consistent data or no data at all in your database. The bonus is that all queries in the transaction use one connection, so you won’t drain your database pool.
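The pattern can be sketched like this - with a hypothetical in-memory `FakeDb` standing in for a real client (e.g. node-postgres), so the commit/rollback behavior is visible without a database. The top level opens the transaction; lower-level modules only receive it.

```typescript
// Hypothetical transaction handle: writes are buffered until commit.
interface Tx {
  queries: string[];
}

class FakeDb {
  rows: string[] = [];

  // Opens a transaction, runs `fn`, and commits only if `fn` resolves.
  // If `fn` throws, the buffered writes are simply never applied (rollback).
  async withTransaction<T>(fn: (tx: Tx) => Promise<T>): Promise<T> {
    const tx: Tx = { queries: [] };
    const result = await fn(tx);
    this.rows.push(...tx.queries); // commit
    return result;
  }
}

// Lower-level modules receive the transaction instead of opening their own.
async function insertUser(tx: Tx, name: string): Promise<void> {
  tx.queries.push(`INSERT user ${name}`);
}

async function insertAuditLog(tx: Tx, message: string): Promise<void> {
  tx.queries.push(`INSERT audit ${message}`);
}

// Top level: either both inserts land, or neither does.
async function createUser(db: FakeDb, name: string): Promise<void> {
  await db.withTransaction(async (tx) => {
    await insertUser(tx, name);
    if (name === "") throw new Error("validation failed halfway");
    await insertAuditLog(tx, `created ${name}`);
  });
}
```

If `createUser` fails halfway, the user row never reaches `db.rows` - the exact "half-written data" scenario a missing transaction would have left behind.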
But everything has its limits, and large transactions can cause trouble too. Try to keep your transactions as small as possible while ensuring that they contain all of the operations that need to be atomic - especially operations that modify data.
One of my worst fears is jumping on a project where nobody has updated dependencies since…the beginning of time. Not only are you lacking those sweet features you’ve grown accustomed to, you also have to face the massive backlash when you decide to update them. Rewriting half of your codebase due to technical debt is a very enjoyable activity that I highly recommend to everyone. Combine this with not having any tests at all and you are in for a truly great time.
We can all agree (I hope) that a file containing 10,000 lines is a particular kind of hell. It gets even worse when you realize that two-thirds of the code could live in separate files. You can avoid endless scrolling by separating code into files grouped by related logic or responsibility.
I confess that I always thought having small modules containing only one function was overkill, but they are actually perfect for keeping your code clean and readable. Another advantage of small modules is easy testability.
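For instance, a single-purpose module (a hypothetical `src/lib/slugify.ts`) exports one function with no hidden state - which is exactly why it is trivial to test in isolation:

```typescript
// Hypothetical src/lib/slugify.ts - one responsibility, one export.
export function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse anything non-alphanumeric into a dash
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
}
```

A test for this module needs no mocks, no setup, and no knowledge of the rest of the codebase.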
Proper naming goes hand-in-hand with structure - in fact, it’s a basic requirement for meaningful structure. It’s nice to have everything in nice small modules, but if you end up with file names like /src/middleware.ts, it’s kind of counterproductive. If you don’t name files and folders according to their purpose, finding anything in a large codebase quickly descends into a nightmare.
Nothing is worse than a codebase that doesn’t follow a unified code style. Enter, Linters.
Code that follows a unified style is much easier to read and helps you orient yourself in the codebase. It gets better - it saves time too! For example, if you use TypeScript and prefer readonly properties in objects, you will save at least one iteration of code review by having this rule enforced in ESLint. Also, it looks great.
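A minimal sketch of what that buys you, with a hypothetical `Config` type: marking properties `readonly` turns accidental mutation into a compile-time error, and a lint rule (e.g. one of the typescript-eslint readonly rules) can require the modifier so no reviewer ever has to ask for it.

```typescript
// Hypothetical app config - properties are readonly by convention,
// enforced by a lint rule instead of a code-review comment.
interface Config {
  readonly apiUrl: string;
}

const cfg: Config = { apiUrl: "https://example.com" };

// cfg.apiUrl = "https://other.example.com";
// ^ would not compile: Cannot assign to 'apiUrl' because it is a read-only property.

console.log(cfg.apiUrl);
```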
There is a special place in hell for people that do this. Why do you even use typed language if you are too lazy to use proper types?
Always invest the time to type everything properly. If you have to refactor in the future, having strict and proper types makes the job much easier. If you skip proper typing just to get things done, rest assured that the decision will cost you at least twice that time in debugging. Or worse, someone else will have to fix the mistakes caused by your laziness. You will be loathed.
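A sketch of what `any` actually costs, using a hypothetical `Invoice` shape: with `any[]`, a typo in a property name compiles silently and surfaces as `NaN` at runtime; with a declared type, the same typo is caught before the code ever runs.

```typescript
// Properly typed: the compiler knows exactly what an invoice looks like.
interface Invoice {
  amountCents: number;
  paid: boolean;
}

function totalUnpaid(invoices: Invoice[]): number {
  return invoices
    .filter((invoice) => !invoice.paid)
    // A typo like `invoice.amuontCents` here would fail to compile.
    // With `invoices: any[]`, it would compile fine and silently produce NaN.
    .reduce((sum, invoice) => sum + invoice.amountCents, 0);
}
```

The lazy `any[]` version is faster to write by about thirty seconds - and pays for itself the first time a `NaN` shows up on an invoice.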
I am guilty of this, but it’s essential to get rid of it. It’s really easy to think you have written perfect code…until someone else looks at it. If you are like me, you will eventually get mad if your code gets returned to you a few times. But in most cases, it’s for a good reason (or many good reasons). Maybe you used the wrong approach to the problem, or used a solution that doesn’t fit the rest of the codebase. Whatever it is, keep in mind that, in the end, you both want the same thing: robust and readable code.
I know it’s not always easy to follow best practices due to…whatever reasons. But please try - and try hard. Investing more time in initial development almost always pays off, and it’s always better to get things right the first time.