Xoogler here, circa 2015. My reaction is that this is a very google3 (i.e. web services) centered book. But Google contains multitudes and it feels wrong to ignore them.
For example the book has a section called "How Code Review Works At Google." And it goes on to describe strictly the google3 process. But Chrome, ChromeOS, GoogleX, others have different processes. If Google has a proven model, why do so many of its projects deviate from it?
During my time there, the Android team was recruiting internally, advertising "come work on Android, we don't require Readability." It was seen as an internal competitive advantage to reject these processes! What does that say about how they are perceived internally?
I speculate that Android and Chrome and others have distinct processes for a good reason, and that the book is unknowingly slanted towards web-service style engineering.
I assume from the capitalisation and context that Readability is some very specific and very strict set of rules? I’m curious what would be in there that would be so off-putting...
The readability process at Google is how you gain the ability to submit code without a language style reviewer. To submit code into google3 (their main repo for server-side stuff), a changelist requires acceptance by a person or people who collectively own all of the code being changed, and by a person or people who collectively "have readability" in all of the languages in the change. The simplest example: there is some C++ code that you own, and you have C++ readability. In that case a review from anyone will do. If you don't have C++ readability, you need a reviewer who does, and they need to carefully review your change for style.
It used to be that you had to do a "readability review" which was a large change that you developed specifically to gain readability in some language. A style reviewer would approve it after (usually) many rounds of comments, and then thereafter you had C++ readability forever.
More recently some language teams have adopted incremental readability. In this process you add a language reviewer to all of your changes, and after you demonstrate that you know what you are doing, they will grant you readability. I think the Go team was the first group to adopt incremental readability, but it spread to other languages.
This process is off-putting to people who want to join Google but not be acculturated to their language style rules. There's a surprising number of these people. In mainline Google these people simply will not get bonuses, raises, and promotions. In defective organizations like Android they just discard Google code quality standards and it shows.
The way I've described it to people is that you need to get at least one review from someone who...
- is an Owner of the code
- has Readability in the language of the code
- is not you
If you are an owner and have readability, you just need someone who isn't you to sign off on it. If you don't have ownership or readability you need to get someone who has those things to sign off. If you can find someone who has both, great. Otherwise you'll need two people.
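That rule can be sketched as a toy function (my own model with hypothetical names, not Google's actual tooling):

```python
def can_submit(author, approvers, owners, readability_holders):
    """Toy model of the google3 approval rule described above.

    `approvers` are the people who have signed off on the change;
    `owners` and `readability_holders` are sets of people."""
    everyone = {author} | set(approvers)
    # Someone involved must own all of the code being changed...
    has_owner = bool(everyone & owners)
    # ...someone involved must have readability in the language...
    has_readability = bool(everyone & readability_holders)
    # ...and at least one approver must not be the author.
    has_second_pair_of_eyes = bool(set(approvers) - {author})
    return has_owner and has_readability and has_second_pair_of_eyes
```

If the author owns the code and has readability, a single outside approver satisfies all three conditions; otherwise one or two approvers must cover the missing roles.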
Having readability in a language can make you pretty desirable, particularly if it's in a language that your team doesn't often use. It's hard to get, and it takes time. My team is pure Java, but once in a long while we check in Python scripts. Most people have Java readability, but I'm one of two people on the team with Python, which makes me more valuable to the team.
It's really hard to get someone who isn't on your team to give you a readability review.
I’m pretty sure that will trigger reviewed-and-submitted code audits for any role that handles PII. Don’t want the HN-reading public to get the wrong impression.
This is probably a very dangerous book that a bunch of monkeys will make into a dogma. On the other hand, this one from Google is awesome http://streamingsystems.net/
Upvoted based on the authors (and will definitely be reading)...
Titus is both on the C++ standards committee and did a ton of work working on C++ style and core libraries when I was there.
Hyrum was the king of large-scale refactoring and had (IIRC) more than a billion lines of code changed and millions of CLs submitted across all of Google by the time I left.
I've listened to a few interviews with Titus Winters on CppCast and gained a tremendous amount of insight on C++ systems development and software engineering in general (as well as Google's approach, obviously). IIRC he is a fundamental part of the Abseil project and hence has a deep involvement in Google's interaction with the broader open source and C++ communities.
If those quick talks were any indication, there will be a lot of value in this book for experienced developers.
This book is about writing software at scale. E.g. in an org with thousands of people, who might be using your APIs or systems across multiple revisions for years to decades.
I feel like it's more geared for people that have worked in software for at least a decade and need to break bottlenecks in their org. Either as guidance, or as a sales pitch to show executive management "See Google does it."
If you’re interested in becoming a software engineer or are junior, do not read this and cargo cult it because many of these things will not be good practices for 99.99% of the industry.
Google has custom tooling, custom kernels, custom hardware, etc., and legions of support at every layer of the stack. Complexity is not eschewed; rather, it is embraced if it means some slight improvement in utilization or a better-looking latency distribution.
Those goals are not sustainable in normal businesses, even tech ones.
Google places very little emphasis on things like code readability and testability because it’s easy for them to do rewrites or just run tests on 1 million cores to shake out race conditions and locks.
Read this for entertainment but don’t try to blindly apply it to your work environment. Most of it will make you a bad engineer for non-FAANG envs.
> Google places very little emphasis on things like code readability and testability.
Why do you say this? I've seen the opposite.
I can't submit untested code or code that doesn't have "readability" approval. It's certainly not perfect but testability is a huge part of any project I've worked on.
The “readability” label comes from other “brilliant” Google engineers.
Also, tested code is not the same thing as “testable” code. Mocks and monkey patching can very easily give you 100% coverage of untestable code with garbage tests.
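A contrived Python illustration of that point (hypothetical names throughout): the test below executes every line of the function via a mock, so coverage reads 100%, yet it would pass even with an obvious bug.

```python
from unittest import mock

def charge_customer(gateway, customer_id, amount):
    # Bug: the amount is ignored and 1 is always charged.
    return gateway.charge(customer_id, 1)

def test_charge_customer():
    gateway = mock.Mock()
    charge_customer(gateway, "cust-42", 100)
    # Every line ran, so coverage tools report 100% -- but nothing
    # asserts on the amount, so the bug sails straight through.
    gateway.charge.assert_called_once()
```

The coverage number says the code was exercised, not that its behavior was checked.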
> The “readability” label comes from other “brilliant” Google engineers.
This doesn't mean it leads to complex code.
The Google cpp style guide[0], for example, tries at length to keep code simple and stresses that even in C++, you should prefer simple things unless the performance benefits (of, say, nontrivial ownership) are provably necessary:
> The performance costs of value semantics are often overestimated, so the performance benefits of ownership transfer might not justify the readability and complexity costs.
> > The “readability” label comes from other “brilliant” Google engineers.
> This doesn't mean it leads to complex code.
It does mean that it’s a completely subjective thing coming from other people in the very same culture of which I’m being critical. It’s like someone claiming the Chinese government isn’t authoritarian because Chinese citizens have slapped on the “non-authoritarian” label.
Chiming in here to agree with the others. In particular, finding simple, understandable solutions to complex problems is highly valued at Google. Complex solutions are an anti-pattern.
And the comment about readability and testability is so far different from my experience that I have to wonder where the poster is getting their information. Google code reviews have to be among the most nit-picky I've ever seen in a 30+ year career, with serious attention to detail all the way down to punctuation in comments.
Nit-picky reviews are a smell, not a good thing. They can often mean that the reviewer is not really reviewing the big picture (is the intent of the change being fulfilled, is this the right place in the architecture, etc.) and is making up for it by focusing on irrelevant details.
Seriously, code review on punctuation in a comment is not good in any scenario and it speaks nothing to the legibility of the code itself. If anything, it might mean the code is unreadable and the reviewer is too embarrassed to point out that they can’t follow it.
This conclusion doesn't, at all, follow from the premise.
While yes, nit-picky review could mean the reviewer doesn't understand the big picture, an LGTM with no comments at all is more likely to be indicative of missing something.
What you really mean is that nit-picky reviews that don't pick up on actual logic or functionality issues are a smell. Which is true, but isn't usually the case at Google[0]. The chapter on teamwork covers some relevant concepts (like psychological safety).
You seem to be assuming, without prompt, that nit-picking punctuation comes at the expense of a thorough overall review.
Code review on punctuation is absolutely an important part of readability. Consistency makes understanding and scanning (for human readability) and potentially parsing and modifying (for machine readability) much easier.
As someone who both reviews a lot of code and reads a lot of code, consistent documentation with good grammar and punctuation is enormously helpful.
I’m not assuming, this is based on my experience at Google. People that focused on punctuation rarely provided anything more than the superficial review.
People who focus on punctuation do so at the expense of everything else, sure. But people who give thorough reviews give thorough reviews, including nitpicky comments.
I've given and received thousands of reviews at Google (in multiple languages, across multiple teams), and can say that my experience has been fairly consistent: code review is high quality and detail oriented, including, but not at all limited to, small things like grammar and comment formatting.
I’m not begging the question, I’m just sharing my experience from working at Google and other companies. By a wide margin, the most useless reviews were ones nit-picking formatting, punctuation, etc.
> But people who give thorough reviews give thorough reviews
Ok.
> including nitpicky comments.
Disagree. Pointing out that someone forgot to capitalize an initialism in a comment is not thorough, it’s a time suck. A thorough and competent reviewer will realize it doesn’t matter and will know not to distract the CL owner with such irrelevant trash.
Sure, there is a small section of the Venn diagram that leaves good review comments and sprinkles in “nit:” turds, but that group is much smaller than both the fully incompetent nit-pick group and the competent reviewer group.
Seriously, unless a code comment is illegible or it’s being compiled into user-facing docs, it’s absolutely useless to leave something like “nit: it’s TCP, not tcp”.
> Seriously, unless a code comment is illegible or it’s being compiled into user-facing docs, it’s absolutely useless to leave something like “nit: it’s TCP, not tcp”.
I think you're describing style errors, not grammar errors. For example, the Google style guide says that (certain) comments should be complete sentences. Asking people to use good grammar, and to capitalize and punctuate those sentences as though they were documentation (because they are!), is valuable for reviewers.
I say this as someone who, when reading unfamiliar code, is able to understand it more readily if the inline documentation is easy to read. "tcp" vs "TCP" doesn't impact that, but other grammatical nits absolutely do.
> I think you're describing style errors, not grammar errors.
The entire thread (not started by me) is about nit-picking punctuation and now you’re trying to explain to me that I’m describing style errors. So what? It’s useless feedback regardless of which chapter in the Holy Google Docs covers it.
> Asking people to use good grammar (and capitalize/punctuate) those sentences as though they were documentation (because they are!) is valuable for reviewers.
Right, it’s valuable for the reviewers because it makes it seem like they provided feedback when they have nothing of substance to add. It’s not valuable for the author though nor is it realistically valuable to future people reading the code in question.
>I say this as someone who, when reading unfamiliar code, is able to understand it more readily if the inline documentation is easy to read.
My experience differs. Consistent and well formed prose in docstrings and comments absolutely helps reason about unfamiliar code. Nitpicking for good formatting, spelling, and grammar is valuable in code comments just the same as in documentation or any other form of technical writing.
If the comments aren't important enough to spend care and thought on, they should be deleted.
> If it has a meaningful impact on the ability to read it, it’s not a nitpick.
This is boring semantics. Given what you describe as nitpicks, we draw the line of meaningful impact on readability in different places. So I'm claiming that some of the things you describe as nitpicks in fact aren't.
The closest I've seen to this is comments of the form "can you reformat this" or "this doesn't match the rest of the file". Google's style guides have fairly strong guidance on formatting, so there's usually a "right" way (enforced by a linter), and if not, then usually you defer to the existing style in the module.
I meant in general.
Except in Python-like languages, which make it a non-issue, I think people in C/C++/Java attach way too much importance to spacing, trailing whitespace, tabs vs. spaces, and column width.
I would agree with linting or just deferring to existing style, but in truth I care little about this, it's just not significant to me (yet others seem to overvalue it, to the point of talking exclusively about this).
At Google (in google3) there are automatic linters that run on every change, and if they fail then your change simply will not be approved for style. There's no debate about it. This is one of the great things about gofmt: either your Go program passed through it, or it did not and won't be accepted. Arguments about the formatting need to be in the form of a changelist against gofmt itself.
I agree, automating this so as to make it a non-issue is really the only way to deal with this at scale (i.e. in presence of a lot of change and many reviewers).
Every team I've been on at Google has presubmits that enforce this, along with command line tools to "fix" it. There's no arguments about spacing. Nobody really cares about extra lines, but if you use tabs instead of spaces you're just wrong and your CL can't be submitted.
It's actually nice. I disagree with some of the rules and would like them to change. I think that if your if-block has a single statement, there's no else, and it all fits on one line, then it's fine to inline it:
if (theAnswerIsKnown) return theAnswer;
But I'm wrong. Because the linter doesn't allow it. So I moved on and I don't bother fighting that fight.
Edit: To be clear, I might be right, and this might be better, but I'm "wrong" in the sense that it's not linter compliant.
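A minimal sketch of what a presubmit check like the tabs-vs-spaces one might look like (my own toy version, not Google's internal tool):

```python
def find_tab_indented_lines(source: str) -> list[int]:
    """Toy presubmit check: return the 1-based numbers of lines
    indented with a tab. A real presubmit would block submission
    whenever this list is non-empty."""
    return [lineno
            for lineno, line in enumerate(source.splitlines(), start=1)
            if line.startswith("\t")]
```

Because the check is mechanical and runs on every change, there is nothing left to argue about in review: the tool either passes the change or it doesn't.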
w.r.t "if you use tabs instead of spaces you're just wrong", can you expand on that?
Do you mean 'wrong' in the sense of 'wrong according to the rules enforced by the linter' or 'wrong philosophically' ? If it's the latter, how is that wrong and is not even a subject for debate?
This is an honest question, I guess I just don't see how something like this can "just" be wrong (to me it's irrelevant and not worth the ink used to write such statements in code reviews... though I'm playing devil's advocate here obviously :-) ).
PS: of course if pre-submit checks enforce the rules, then as you said it becomes a non-issue. I have been in environments where there are no such tool-enforced checks, yet people comment on it endlessly, routinely blocking code reviews for whitespace issues...
> Google places very little emphasis on things like code readability and testability because it’s easy for them to do rewrites or just run tests on 1 million cores to shake out race conditions and locks.
This is complete nonsense. Google's code review and testing and reliability practices are better and more sensible than most big and small companies I have worked for and heard about.
I think you have a point here, but it was not well articulated in your post. Much of the industry doesn't focus on software engineering, only programming, which leads to massive unmaintainable garbage codebases. These are the same people who want to save a few dollars by outsourcing the work to low-cost places like India and are too ignorant to realize that the development takes 10x as long for a product that is unusable. My best advice would be to avoid these organizations, as they will never value the engineering and only seem to focus on lines of code output as a metric of productivity. Unfortunately this is a lot of the "industry," though I believe the 99.99% is a bit of an exaggeration. As software engineers, we should find value in this sort of information regardless of our current employer's opinion and hope that we can land at an institution that values engineering.
Their Google status is irrelevant. The nature of their comment is that there's a propensity to emulate big tech when it's not needed. Off the top of my head: Kubernetes, microservices. My citation is news.ycombinator.com.
How is it not? Seems like simple economics to me. Something like maintaining a custom kernel distribution takes ${x} amount of engineer time for ${y} amount of “savings” (better perf, custom patches, nonstandard features, whatever). Why bother until you have a fleet of machines large enough for y to be greater than x?
I don't have numbers but I feel this is exaggerated. Most people work on pretty normal stuff.
The biggest proprietary sink for most people is stuff like the build system/tooling/etc. Not many people have to be in a role where you're micro-optimizing something. They certainly exist, but not in the numbers some people on here seem to think.
The opening chapter is very clear on that point: this is a scouting report of a software ecosystem that works at scale, what things we think are a good idea generally. It is not a "This is how you should do it," in that Google's requirements are a few orders of magnitude different than others (expected code lifetime, codebase size, engineering population, compute requirements).
Thank you for the link, though I strongly prefer to see the full content structure, i.e., page numbers for all the relevant sections. That lets me assess how much relative attention is given to a particular topic. I believe a full table of contents should be a mandatory element in visual book previews for all book publishers and sellers (for outlets lacking visual previews, this information could be provided in text form).