The security hole in sudo (sophos.com)
50 points by casca on May 23, 2012 | hide | past | favorite | 75 comments


I've always felt like the switch statement's "break" behavior should have been the implicit case, with a statement to trigger fallthrough if you really wanted it. I wonder what percentage of the time fallthrough is actually desired in switch statements. I would assume it's wildly in the minority.


Makes total sense when you remember C's assembly roots. On most architectures including the PDP-11, the program counter just moves to the next instruction. To make it do anything different, you need a jump instruction, and that's what the "break" means.

To a sibling comment: C# does permit fallthrough, just with ugly syntax. You can "goto case 2". http://stackoverflow.com/questions/174155/switch-statement-f...


Douglas Crockford said in one of his talks on JavaScript (www.yuiblog.com/crockford/) that the syntax came from Fortran. And indeed it's because assembly did the fall-through thing...


I agree. In Go break is the default, and there's an explicit 'fallthrough' statement.


Go was not designed to be a portable assembler. break directly maps to a jump instruction in assembler; fallthrough does not. Since C is designed to be portable assembler, it makes sense to include break as a keyword instead of a fallthrough keyword (and break would still be needed anyway for for loops and while loops).


C# doesn't have fallthrough at all, but it won't compile without "break;". I'm really not sure what they were going for there.


You can "goto" any case label in C#, which works like fallthrough but is more flexible (since you can jump anywhere). At the same time, it avoids the refactoring hazard associated with fallthrough, in which changing the order of the switch labels can change the behavior of the code. Of the languages that have "switch" (not counting ML/Haskell-style pattern matching here), C#'s approach seems ideal.


The C# approach sounds like a recipe for abuse:

    switch(something)
    {
        case 1:
            // Do stuff
            goto case 4;
        case 2:
            // Do stuff
            goto case 1;
        case 3:
            // Do stuff
            goto case 5;
        case 4:
            // Do stuff
            goto case 3;
        case 5:
            // Do stuff
            break;
    }


That's quite horrible! Fortunately, it's obviously horrible, and since "goto" has such a stigma attached to it, programmers will only use it when it's necessary. (Cross your fingers.)


register_globals would like to have a word with you.


I agree that removing implicit fallthrough is good, since it's a 90%/10% feature, but my problem is that the "break;" is meaningless extra syntax, since the lack of it generates a compiler error.

C# already implies no implicit fallthrough, so there's no need to explicitly state that you're not falling through. A better solution (for every language, really) would be a "fallthrough;" statement or "goto case x;", which C# does get right.


So Java and C++ programmers don't panic and think the whole thing will fall through.


    switch (v) {
    case FOO_A:
    case FOO_B:
      do_some_foo();
      break; // if you really want :P
    case BAR_X:
    case BAR_Y:
      do_some_bar();
    }
Here you have two implicit fallthroughs used for cleaner code.

Another use is to express one case as a subset or superset of another case:

    switch (v) {
    case FOO_BAR:
       do_some_foo();
    case BAR:
       do_some_bar();
    /*...*/
    }
IIRC, there was some C-like programming language out there (C# probably?) which only provided default fallthrough if there was no code between the cases, as in the above example.


You're defending against a different claim. The idea is not that C's current way of doing things has no advantages, the idea is that it's not the optimal way. Pointing out a case in which it happens to initially be more helpful is basically a no-op. And I can think of better ways to spell even what you have there. Consider a "cases FOO_A, FOO_B" construct or something that both makes explicit what you are trying to do and still retains default break behavior. Fewer characters, clearer semantics, not hard to format for easy reading. C's way is definitely not optimal even on its own terms.


The missing break has been the sort of bug that haunts me at least twice a month at my current job.

We do car diagnostics and we code in C. Countless times I would not understand why a particular path was not taken, and a missing break was the answer.

Now, I can see the utility of fall-thru switch construct, but this is a kind of bug that could be avoided at the syntax level. Just don't require a break and develop another syntax for fall-thru cases.


@klez

What exactly do you do? The reason I ask is because 50 hours a week I'm an auto tech. The remaining hours that I'm awake, I code. This week my project has been reverse engineering a PC-based scanner; I have to learn how it interacts with OBDII / CAN in an attempt to make a super tool that gives me specific abilities. The people I deal with during the day know nothing of computers and the people I deal with at night know nothing of cars.

You are the first person that I have ever heard mention both in the same post.


This is pretty off-topic from the OP, but I want to point out that the ISO standard for OBD-II is a fantastic reference for this kind of project. We built an OBD-II reader for our engineering capstone project, and that ISO standard was an invaluable resource for getting things to work.

If there's a university nearby, you might want to go check out their engineering library and see if they have a copy. I don't recall the ISO number off the top of my head. If you're really serious about the project, I think the book only costs around $400 or so.


Most of the "super abilities" provided by high-end scan tool / ECU flash packages work by using manufacturer-specific data sent over the CAN bus, not OBD-II itself.

Some newer and higher-end cars even use proprietary crypto keyed off of the VIN and a shared secret to protect from access by third-party tools. There's a good-sized cottage industry in breaking proprietary ECU protocols and crypto, because there's a solid niche market in selling mechanics one $20,000 "multiflash" tool to replace their ten $10,000 manufacturer-provided "diagnostic systems," especially in the high-end exotic market.

I was able to find enough resources online to build an ISO OBD-II scanner as my seventh-grade final project, without needing to hunt down the standard. Today there are even open-source projects like http://en.wikipedia.org/wiki/OBDuino which can provide a solid code-based understanding of the three old OBD signaling types (PWM, VPW, ISO), as well as the basic protocol.


I'll let you all get back to topic at hand. I have to say thanks though. I never imagined getting so much helpful info. The super ability I seek seems like it would be in a less secure area because even cheap hardware store tools can do it. I am trying to remove, relocate, or hide fault codes without resetting the emissions monitors. That way as a hush hush marketing ploy of my business I can truly "guarantee to pass" and depending on the solution possibly guarantee to never fail again.

I hit an idea slump in my normal utility making and decided to put my otherwise wasted time to use.

Thanks Again ~


Of all the cars I worked on (mostly Mitsubishi and Ssangyong), very few had non-standard procedures for erasing DTCs.

I don't know what you mean by 'erasing the emission monitor', unless you mean the freeze frame, which usually gets erased when you erase DTCs with the standard 0x14 request.

Anyway I work exactly as you do, by reverse-engineering official tools.

And I'm not very knowledgeable about cars anyway :)


Or just don't use switch. If it's a small expression, an if/else chain takes no more lines of code and is not less efficient. If it's a large expression, it might be better expressed with a function pointer table.

I'm not a zealot on this stuff, and write switch statements regularly (though I do strongly prefer if chains for most stuff) but if you're really hitting this regularly I think a change in process might be in order.


It can be slightly less efficient when (on supporting architectures) the compiler could automatically generate a jump table in place of stacked conditionals, which it cannot do for a chain of ifs.


This is a myth that won't die. Jump tables can't be branch predicted, so they're generally significantly slower unless the number of cases is very high. And it's not true that the compiler "cannot" emit a jump table for an if chain anyway. A bunch of if/else-if expressions of the form "some_var == constant" can certainly be detected (though again, it probably isn't).


Unfortunately, I'm not able to decide to do this.

Most of our code is machine-generated, and our boss doesn't want to touch the generator in general.


Always close your stuff before filling it. E.g. instead of typing ...

switch (uint8_t bla){ case 0: stuff; ...

... and filling everything as you go along, first type the skeleton ...

switch () {case: break; default: break;}

... and then fill everything in. It's anal/pedantic but it becomes second nature really fast. Do it for everything (loops, structs, functions,...).

I once lost 2 entire days of my life, I kid you not, hunting an if (a = b) that had to be a == b, and decided to change my approach.


I can empathize. I've always felt that the implementation of the switch statement in C is a bomb waiting to explode.

One of the many good decisions that went into C# is reversing how it works; in other words making it compulsory to explicitly state when you need control to fall from one case to another.


If you're having this kind of problem a lot, now might be an excellent time to start using a lint tool in your development process. It'll catch these things.


Long story short, coding standard and tooling are very poor at my current job.

I'm leaving in a week, so it won't be a problem for me anymore...


Yet another avoidable bug.

Over time, I've come to believe C is deadly flawed, and greenfield projects should seriously investigate languages that are better designed.

Trivial errors in C like dangling elses, fallthroughs in switches, and easy buffer-overruns by default say to me that C's broken, has been broken, and will continue being broken.

For non-realtime apps, I would recommend pretty much anything but C; for realtime apps, I would be looking into things like SystemC or D and choose appropriately.

This class of avoidable bug should not be happening in 2012.


C does have issues, but lots of people are complaining about things that are easily solved by using the proper tools. Turn on compiler warnings and/or use a lint tool.

(For reference, a project I have at hand is compiled with -std=c99 -pedantic -W -Wall -Wno-sign-compare -Wno-unused-parameter -Wbad-function-cast -Wcast-align -Wchar-subscripts -Wfloat-equal -Wmissing-declarations -Wmissing-format-attribute -Wmissing-noreturn -Wmissing-prototypes -Wnested-externs -Wpointer-arith -Wshadow -Wstrict-prototypes -Wwrite-strings -Wundef -Werror and uses lint to check for improper conversion between types.)


So it's not broken, only broken by default. Because having to add three lines of command line flags to get sane behavior doesn't seem like a good default to me.


Truthfully, C does what you tell it to do. As it turns out, most people don't want it to do what they tell it to do, they want it to do what is in their head. Those warnings are "did you really mean what you just told me to do?". For some people, the answer is "no, what, wait!" -- those are what they are for. Not that the behavior is insane, just "less common". I don't think people fully grasp the idea that C does what you tell it to do when they first start using it.

If you told C "make me a sandwich", it'd give a warning about not having enough mustard to cover your entire body and an error about not having a large enough bun to fit you in.


This is facile; there is no good reason why C couldn't have a less-dangerous switch statement, or why compilers don't have nicer defaults. (If you run gcc without -W -Wall -O2, you either really know what you're doing or you're doing it wrong; in either case, having those options as defaults makes sense.)


Since the context is "C", not "C compilers", I'd say C could have a less dangerous switch statement by requiring break/continue to indicate this. Agreed. Why people don't document/annotate intentional fallthroughs for static analysis is beyond me. OK, so we can eliminate one type of bug if we make that change (assuming we fixed all legacy code) -- what about the other legitimate uses of potentially suspect constructs?

Compiler writers can do whatever the heck they want, and if dumping ALL THE WARNINGS was the priority, you'd need three lines of command line switches to turn off the "noise" warnings. Clearly, a bunch of people see them as "noise" or don't care about writing clean code. That's more a systemic problem with programmers.


There are many cases where fall-through switch statements are preferable, but that's almost beside the point. C is basically just portable assembler with syntactic sugar sprinkled on top. Assembler doesn't have switch statements, safe or otherwise; assembler has branching and jumping. That break statement in C is almost literally telling the assembler to jump out of a chain of branch statements, probably after returning from a jal to the function within the same case block. Adding an explicit fallthrough statement just takes the language farther away from assembler and makes compiling more complicated, which is silly since the unsafe switch with break statements will/should produce the exact same assembler (this is different from the difference between using if{}else{if...} chains and using switch, which will produce different assembler code).

To make it even more fun, there are many use cases for fall-through switch statements. My personal favorite is Duff's device, an unrolled serial copy that will run faster on certain architectures than a more typical serial copy:

    send(to, from, count)
    register short *to, *from;
    register count;
    {
        register n = (count + 7) / 8;
        switch (count % 8) {
        case 0: do { *to = *from++;   /* "to" is a memory-mapped register */
        case 7:      *to = *from++;   /* use *to++ for non-memory-mapped targets */
        case 6:      *to = *from++;
        case 5:      *to = *from++;
        case 4:      *to = *from++;
        case 3:      *to = *from++;
        case 2:      *to = *from++;
        case 1:      *to = *from++;
                } while (--n > 0);
        }
    }


I thought Duff's device was one of the few examples that use fall-through switch statements. I was taught to use switches with blocks, because otherwise unexpected things happen.


The defaults are not very good, no, and "how to use gcc effectively" is not part of many C tutorials. That said, you do get most of the benefits by using just -W -Wall.


Sure. And if you get a static analyzer, you solve the problem more or less.

But, most of the bugs I notice in the static analysis output are not possible in the model of other languages. That's... kind of depressing.


Most projects don't need C, I'll agree. But I have a hard time seeing how a language with a 1:1 to 1:5 statement-to-machine-instruction ratio will be displaced by a language that is safer while being at least as efficient and at least as expressive. If you say "but I don't need a 1:1 to 1:5 ratio..." THEN YOU'RE DOING IT WRONG.

C isn't broken. Gah. Why does everyone have such a hard time understanding that? The language does //exactly what you tell it to do//. If that's overrunning a buffer, then that's what you get. If it's falling through a switch statement, that's what you get. If it's typecasting a block of memory that is too small to hold the structure, then that's what you get. In short, for many programs, that means doing stupid things at some time. For many programs, edge cases aren't considered at all. Blame C? It let me?

For every single one of the "warts" of C, there is also a usecase. This is why it will not be "fixed" and why it isn't "flawed". To remove them is to remove a useful function that could otherwise cause more machine code to be needed. THIS IS THE KEY. Minimizing machine code (overhead).

If you aren't good at managing these edge cases, don't need native code, don't need interop with other code, can take a performance hit, or don't want to use automated tools to help you, then don't use C for large programs.

Take it for what it is: a low level, mostly portable, mask over assembly. Stop complaining about C, start complaining about programmers.


When people complain about the security of C, it's usually the buffer overflows.

But this switch/case/break statement behavior exists in lots of languages. I almost expect it. (It doesn't in Ruby and I usually have to stop and think about it when geeking out in that language.)


Bluntly put, you miss the point, which is - C could be redesigned to preserve the characteristics that make it good, but have them off by default.

Most C code isn't that critical. Say, about 97% of the time.


setuidgid is written in C. No bugs yet reported.

Plan 9 doesn't have the concept of superuser. "sudo" is moot. Plan 9 is written in C.

Just two examples that come to mind.

Maybe it's not the language. Maybe it's the person using it.

There is never going to be a perfectly "idiot-proof" language that shares the power of a language like C. But many still seem to believe this is possible.


Better design alleviates the problem out of the gate, for a system-specific solution. But if our best solution is people gotta get smarter... man, I don't know what to say.

As someone who supports ordinary programmers, expecting smarts is not the right track. Expecting the average, and mistakes, is the right track. If your foundation is not robust by default, your systems will not be robust by default either.

To mix metaphors: Better to design foundations that keep the gloves on by default, then when you need to, take them off. You can see that design philosophy in Mercurial.

Suppose - for argument's sake - that C had two different pointer types: checked pointers for arrays and unchecked pointers for arbitrary memory and that conversions between the two were relatively seamless. What difference would that make, 97% of the time?


Well, something like "privileges" seems like a design that is a holdover from a time of shared computing. We've adapted it for other uses, and for some people it's become some sort of security religion. If we assume it exists for the purpose of protecting users of the same computer from each other, it has no need to exist in most cases. Computers are affordable enough that we can all have our own. There is rarely a necessity to share computing time.

Plan 9 settled the issue.

But sometimes I think some people have come to love the mental challenge of UNIX permissions. It is like a game to them. I made a function that produces all possible permissions for a file, including the nonsensical ones, so I could display permissions as octal instead of the usual ls format. Have you ever thought about how many possibilities there are? Too many.

The setuid concept is interesting and must have seemed quite innovative at the time. Doesn't surprise me they filed a patent for it. But today, I could just as well do without the complexity.


Sudo has always felt like a bit of a blunt instrument to me.

For example having to type the root password every time I want to run a vaguely administrative program or install something etc gets old fast.

Not to mention the number of times I've made a bunch of edits to something like a conf file in nano and then gone to save and got "permission denied" where the only solution seems to be to exit nano and run it again (and do the edits again) with sudo. Surely it should be able to prompt me for the password at save time?


Sudo has authentication timeouts built in, so you don't have to type in your password (your password, not root's) every time you want to run an administrative program. Also, the fact that you have to sudo is one of the reasons why Unix is more secure than the Windows "administrator by default" approach.

As for your last point, you're complaining about nano. It would be perfectly possible for an editor to sudo when writing to disk.


This seems to be the case with vi and emacs too; I have never seen a Unix program which prompts me for a sudo password at the moment it needs to do a privileged operation, rather than requiring running via sudo.


On emacs, you can C-x C-f /sudo::/etc/hosts and it'll do what you think that means, including prompting for your password only once in $TIMEOUT. It's the same base mechanism that lets you C-x C-f /user@host:path for remote editing through ssh.


When I vi a read-only file:

     "/etc/mail/aliases" [Read only] 36 lines, 1023 characters
It's simply a matter of remaining keen when doing admin tasks. Sure, I have to close/re-open with sudo, but it's prompting me up front that I won't be able to save it.


> Sure I have to close/re-open with sudo but its prompting me up front that I won't be able to save it.

No, you don't, and yes, you will:

  :w !sudo tee %


You can do something like this instead (IIRC):

    :w !sudo tee % > /dev/null
It's a bit of a hack, but it works.


I have, but it was a patched version.

I think there's a sense, which I generally agree with, that random applications asking for your password is something we don't want users to get used to.


Windows hasn't been "administrator by default" since Vista.


It's still administrator by default, they just prompt you to click ok on every administrative action, something users still do blindly.


And Ubuntu users don't blindly enter their passwords at gksudo prompts?


Huge difference between hitting OK and typing the admin password.


Really? Both are pretty instinctive to me, I do them with little thought. I even switch capslock on and off mid-password without thinking about it to type a capitalised portion.


Yes, really. Anyone can hit OK, but not everyone knows the admin password. Someone can't walk up to your machine while you're away and install software in Linux; they can in Windows. Hitting OK is not sufficient. The point of sudo is to verify your identity; an OK prompt does not do that.


No, it's not administrator by default. The first user created is an administrator, but that user normally runs in a standard user security context. The prompt provides a temporary elevation to administrator level by use of an administrative security token associated with the account. The prompt also runs in a separate desktop to prevent malicious code from interfering with it.

If you try creating a new user through the control panel, you'll notice that the default selection is to create a standard user.


> The prompt provides a temporary elevation to administrator level by use of an administrative security token associated with the account.

Rather what I said. A borked version of sudo.


About your file edits:

-Save your modified file to a temp file and work it out with cp or mv after.

or

-Use Vim, and the infamous ":w !sudo tee %".


> Not to mention the number of times I've made a bunch of edits to something like a conf file in nano and then gone to save and got "permission denied" where the only solution seems to be to exit nano and run it again (and do the edits again) with sudo.

Nano asks you for the file name/location when you save, just save the file somewhere you have write access to then move it with sudo afterwards. No need to retype your edits.


> I've made (edited) something (...) in nano and then gone to save and got "permission denied"

sudoedit handles this exact case. Try it, it's very convenient. (Of course, you have to set at least one of SUDO_EDITOR, VISUAL or EDITOR correctly.)

You also want to know about visudo, which will save you some oh-shit-I-locked-myself-out moments.


If I have to use sudo, I always:

  sudo -i
  # do root work
  exit # or ^D


If you find you can't save a file because of permissions, just suspend nano with ctrl+z, fix the permission, resume suspended job with fg, save file. No need to lose your edits or hassle with temp files.


This snippet is ugly anyway. The case-bodies are not indented, the one-liner if-bodies are missing their brackets, asking for more trouble.


Would be nice if compilers offered a warning for missing breaks in a switch statement, and a pragma to mark fallthrough:

    switch(something)
    {
        case 1:
            // Do stuff
            break;
        case 2:
            #pragma switch_fallthrough
        case 3:
            // Do stuff
            break;
        #pragma switch_nodefault
    }


lint will warn about those cases unless they are marked in the code with /* FALLTHROUGH */. I think even if you're not using lint, such comments are good practice.


This is why I always put a "fall through" comment in place of break if I intend to omit it, and always put a blank line after each case. Once I got in the habit, spotting this bug became pretty easy.


Why was the incredibly important caveat -- this vulnerability only affects configurations which are using netmask-based permissions -- left to the last paragraph?


I still wonder why they don't do ATDD. Even when you use a statically typed language there is plenty of room for failure.

I recommend using:

- vagrant http://vagrantup.com/

- toft https://github.com/exceedhl/toft/

- cucumber http://cukes.info/

- aruba https://github.com/cucumber/aruba

to write/reverse engineer examples/user stories.


No

Enough with the nonsense. TDD is great for some tools, in some situations. It's not a cure-all. And now comes "ATDD"; people should start working instead of inventing acronyms.

vagrant? really, REALLY?

Try QEMU. Try chroot

News flash, sudo is not a RoR application. Especially sudo with all the magic that goes behind executing a process as another user.


"Especially sudo with all the magic that goes behind executing a process as another user."

Your last sentence completely undermines your point.

Please check out the mentioned "toft". It's lxc inside vagrant and designed to test complex infrastructure code from the outside in. Nothing web related.


Acronym expansion: ATDD - acceptance test driven development.


C is fine. C is C. If you want a better C, create one. You could name it something clever like C+=1, or C++.



