Functional Programming Doesn’t Work (and what to do about it) (planeterlang.org)
86 points by twampss on Dec 29, 2009 | hide | past | favorite | 69 comments


This is one of a series of posts by someone with decades of experience doing game programming. He's also very fond of Erlang (he's been using it since 1999!), and he's exploring how well it works for a domain he knows, but which is quite far from Erlang's typical niche.

Try reading it (and his archives, at http://prog21.dadgum.com/archives.html ) in that light. It's not just somebody ranting about how FP is junk after halfheartedly trying to learn Haskell. The "Purely Functional Retrogames" series is particularly good.

I would love to see more blogs that have people seriously evaluating how "newer"* languages hold up for real work, not just page-sized greenfield projects. As he puts it, "Timidity does not convince." (http://prog21.dadgum.com/35.html).

* I know, Erlang isn't that new.


I just read all of the "Purely Functional Retrogames" posts, and was sorely disappointed that there's no source. Did I miss something here? The posts were interesting on a high level, but basically gave me blue-balls.


Purely functional programming, that is. Not functional programming per se. With that corrected, I agree with the guy.

However, his critique shouldn't be surprising to anyone but blind believers. Paradigms and features all have comfort zones and none of those is large enough to cover even nearly everything.

Analogously to strong/weak typing and dynamic/static binding, you rarely want only one or the other. You either start with strong and/or static and build dynamic and/or weak behaviour by hand, or you start dynamically and later enforce certain type constraints deemed appropriate. There are a few holy wars in between, but you can rise above that.

Similarly, pure functions and impure state management ought to support each other, not fight each other. Pure functions are kept clean and side-effect-free so that mutations can be sparse and well-controlled. Conversely, the state must be destructively updated somewhere by the impure code so that the pure functions are relieved from having to bother with it.

Similarly, you can start at either the pure or the impure end, but you're bound to end up somewhere in between for any useful program.

You can start with pure functions, build something with them, and worry about saving the state later. The original author is right in that I often start this way and feel somehow guilty when finally writing the state. I shouldn't! Personally, I've noticed that this approach too often results in gigantic mutations on huge, monolithic states that yield huge, monolithic new states that you have to store somewhere.

Alternatively you can start with impure code and clean out stuff into pure functions as you learn how the program evolves. Each time I do this in, for example, Python I get the sense of how fucking brilliant I am. I should do it more often! Personally, while "erring" on the impure side at first I've noticed that the resulting program is often more beautiful as pieces of pure art rise from a pool of impure goo.


I almost didn't realize that this is the Dadgum guy. (The original URL for this post is http://prog21.dadgum.com/54.html, and it's easier to read to boot.) He has vast experience and his articles tend to be excellent. If he has misgivings about FP they are worth taking seriously. Edit: it's clear in context that his misgivings are about 100% pure FP, and he still favors a nearly-entirely-functional style.


The links in the URL you give actually work, unlike the ones at planeterlang. Thanks!


PlanetErlang reproduces whatever is in the RSS feed. I just switched the RSS feed of ErlangInside.com to summaries yesterday, because anytime I publish something they get the credit (and the backlinks).


I'd like to see a rebuttal to the core criticism:

Imagine you’ve implemented a large program in a purely functional way. All the data is properly threaded in and out of functions, and there are no truly destructive updates to speak of. Now pick the two lowest-level and most isolated functions in the entire codebase. They’re used all over the place, but are never called from the same modules. Now make these dependent on each other: function A behaves differently depending on the number of times function B has been called and vice-versa.

In C, this is easy! It can be done quickly and cleanly by adding some global variables. In purely functional code, this is somewhere between a major rearchitecting of the data flow and hopeless.

I am almost certain a Haskellite will say something about monads or arrows, but I don't know what.

EDIT: I guess another criticism might be "when would you possibly need this?". It smells suspiciously like a symptom of bad design -- these two totally independent, low-level calculations are now dependent on each other. A possible use case might be a tweak to an artificial physics, as in a simulation or game.


This criticism echoes what E. W. Dijkstra said before,

As long as programs are regarded as linear strings of basic symbols of a programming language and accordingly, program modification is treated as text manipulation on that level, then each program modification must be understood in the universe of all programs (right or wrong!) that can be written in that programming language. No wonder that program modification is then a most risky operation! The basic symbol is too small and meaningless a unit in terms of which to describe this.

Using global variables in C is still rearchitecting the data flow. It's just a really obvious way to do it.

You basically have 4 things here: functions A and B and variables a and b. Calling A increases a, calling B increases b. Function A depends on a, b. Function B depends on a, b.

In FP, you would probably pass those variables around. In C or some other language you could store them as global variables or pass them around. Global variables are a shortcut for passing them around. Just as you don't pass around all of the assembly code of the functions you want to use to the next function, you don't need to pass all variables around all the time.
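In a purely functional setting, "passing them around" means each call takes the current counters and returns updated ones. A rough Haskell sketch of the four-things setup above (all names made up):

```haskell
-- Counters for how many times A and B have been called.
data Counts = Counts { aCalls :: Int, bCalls :: Int }

-- Each function bumps its own counter and can inspect the
-- other's count to vary its behaviour.
funA :: Counts -> Int -> (Counts, Int)
funA c x = (c { aCalls = aCalls c + 1 }, x + bCalls c)

funB :: Counts -> Int -> (Counts, Int)
funB c x = (c { bCalls = bCalls c + 1 }, x * (aCalls c + 1))
```

The catch is that every caller between the top level and these two now has to thread a Counts value through, even callers that don't care about it.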

To answer your edit: these two totally independent, low-level calculations are now dependent on each other.

This is normal, just look at any example of co-routines. It's like a two-player game where one player goes first but it doesn't matter which.


>In FP, you would probably pass those variables around.

The point is that you might have to rewrite every part of the program in order to "pass around" those variables, whereas in an impure language with global variables, this sort of major rearchitecting is not going to be necessary.


The pure vs. impure stuff is a red herring. It's meant to throw you off the scientific trail. If there are merits to the FP approach, as there are, then one or two drawbacks in a few of the languages aren't a problem. If there are multiple problems with the FP approach, then we discard it and toss out the languages that use it or modify them to only use the useful subset of that approach.

With what little information we have in this example, it would be best to modify the language so that it includes those two low-level functions in its specifications and make those global variables they require.

Modifying the language spec...is that pure or impure I wonder?


>Modifying the language spec...is that pure or impure I wonder?

Neither, it's completely orthogonal to the pure/impure distinction.


Remember that the "FP approach" can be used in so-called "imperative" languages too. The reverse isn't as true.


The reverse is true. Just modify the language. I don't see why people are insisting on this "purity" in approaches. It's obviously leading to difficulties.


Pure has a specific technical meaning here relating to the absence of side effects.

http://en.wikipedia.org/wiki/Pure_function


I think it works both ways.

There are inherently some changes that are extremely hard to make to a well-constructed functional program, and there are equally some changes that are hard to make to a well-constructed procedural program.

I would hypothesize there is no language paradigm in existence that provides optimal efficiency for implementing all possible types of architecture changes.


I think what you're saying here is close to what the OP is arguing. The title obscures this a little bit, but it's clear from the second paragraph that he's arguing against purely functional programming.


I think the first part of your hypothesis is correct, but not quite the second. Most imperative languages support deep recursion and reentrancy, which means that they're a proper superset of (most of) functional languages. You may need to create a "closure" by hand and pass it around, but there's nothing that's fundamentally impossible. The example in the article about adding a side effect and cross-dependency to a deep function really isn't possible in functional terms without changing the design (i.e. adding the state to the function's interface).


Your hypothesis would be correct because languages only scratch the surface of thought. The underlying computations invoked are the important thing and you must understand them before you implement them, no matter which language you're using. Obviously some languages are easier to work with and give you better abstractions to work with making it quicker to write the code but you still have to understand what the heck it is you're really trying to do.


I'm not sure this is a rebuttal, but...

No one disputes that sometimes imperative styles of programming are clearer, or more efficient. Efficient functional programming is a new topic that is growing in importance as it becomes clear that the future is rooted in highly parallel programming.

But these specific complaints mounted at Erlang don't really ring true to me. For one, it's idiomatic to pass state along in functions, and it's not difficult at all to write branching logic based on a variety of cases in state. Now, if you have hundreds of fields in your state you will surely find Erlang to be more awkward than other functional languages, because you can't do things like pattern match and run guards into dictionary structures. But this isn't a failure of "functional programming", it's probably more of the nature of Erlang. Erlang just isn't optimal for that kind of work.

It seems to me like the author's complaint is more "Erlang wasn't the best for my projects and programming style" rather than "Functional programming has failed." Erlang is written by engineers who understood their target domain very well and wrote a very elegant system for wrangling it. Game development wasn't exactly on that list of things Erlang was meant to do well. :)


You make a key point that is often forgotten when people are arguing about programming languages. As part of the process of designing a program, you should pick which language (or programming paradigm) works best to solve that problem. A lot of intellectual time is spent arguing about preferences, and using one language to program a solution that would be much easier in another.

I do a lot of statistical work, and sometimes I use python, sometimes I use C or Fortran, sometimes I use R or PLT Scheme if I want to have some fun. I would agree that I am not a true expert in any of those languages, whatever that may mean, but I can solve the wide range of computational problems that I need to address quickly and efficiently in one of those languages.

This might just come from that fact that an undergrad class drilled into me a healthy disregard for the idiosyncrasies of syntax, libraries and idioms, and instead instilled in me the idea that semantics is the only serious issue for a student of programming languages.


In Haskell you might do the same thing as in C. By recasting the logic where A and B are called to include some read/write state, you can get "globals" just the same - and yes, this can be done with monads (State, or even IORef).

The question becomes, though, why are A and B suddenly interlinked? Why was this context not already in place? Is there not a better structure to this data that would allow a minimization of stateful computations? Does this sudden A<->B dependence actually suggest a radically different shape of the code?

A well-written functional program should make these refactorings "easy" by having many useful and well-defined interchangeable pieces, and often it turns out that complex Haskell programs must refactor until the best dataflow structure is found (Theory of Patches for Darcs or the Zipper in xmonad).
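To make that concrete, here's a sketch of the "globals via State" recasting, with the two call counts as the read/write state (names made up):

```haskell
import Control.Monad.State

-- The "globals": how many times A and B have been called.
type Globals = (Int, Int)

funA :: Int -> State Globals Int
funA x = do
  (a, b) <- get
  put (a + 1, b)   -- record the call to A
  return (x + b)   -- behaviour depends on B's call count

funB :: Int -> State Globals Int
funB x = do
  (a, b) <- get
  put (a, b + 1)   -- record the call to B
  return (x + a)   -- behaviour depends on A's call count

run :: (Int, Int)
run = evalState steps (0, 0)
  where
    steps = do
      r1 <- funA 10   -- B hasn't run yet, so r1 = 10
      r2 <- funB 10   -- A has run once, so r2 = 11
      return (r1, r2)
```

Only funA and funB touch the counters, but everything on the call path to them now has to run in (or lift into) the State monad, which is exactly the refactoring cost under discussion.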


>By recasting the logic where A and B are called

Which in the worst case will involve lifting the type of pretty much every function in the program into a monad. Which is not going to be necessary in C.


> The question becomes, though, why are A and B suddenly interlinked? Why was this context not already in place?

Here's an example, roughly taken from a project I did not too long ago (in an imperative language, not a functional language). Suppose you have an e-commerce system. One day, the business decides to start giving out coupons, say for 10% off. Any order can have a 10% off coupon applied to it. All is fine and good, orders now have an optional coupon attribute, you compute order totals in an obvious way, coupons are nicely orthogonal to the rest of the system.

Then some time later, we decide to add free shipping coupons. But, there's a wrinkle: they only apply to ground shipping. Now, you can only add a free shipping coupon to an order with ground shipping; and, also, if the user changes their shipping method, you have to go look to see if they have a discount which now must be removed from the order. Now, two previously independent aspects of order placement and processing are coupled together.

Because this was written in an imperative language and backed with a stateful database store, it was mostly trivial to make the changes required. This scenario is not actually as bad as the situation the OP described, but my feeling is that it would be more difficult to have dealt with in a pure functional environment.

(BTW, this is actually a fairly mild example of the sort of complicated, non-orthogonal rules which come up in e-commerce systems. I picked it not because it was the hardest to translate to a functional paradigm, but because it was easy to explain in a couple paragraphs.)


Okay, good example. At the same time, though, trying to see the problem through a functional lens, I'm stuck wondering why there isn't already some stateful context in place. You're linking coupons with the order data, and unless you've been just threading that through functions --- which nobody wants to do --- I'd expect there to be more structure.

For some reason I'm stuck on the idea of using heterogeneous lists to store "things that can affect the order". You apply them in order in a stateful context and produce a finalized order or an error. To add coupons you just make an instance that lets coupons be a "thing that affects the order" and stick it in the list. The code is decoupled, order processing is modular, and you're using a data structure which fits the problem well.
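For instance, the "things that can affect the order" could just be functions applied in sequence, bailing out with an error; a toy Haskell sketch (all names and rules made up):

```haskell
import Control.Monad (foldM)

data Order = Order { total :: Double, shipping :: String }

-- A "thing that can affect the order": transform it or fail.
type OrderMod = Order -> Either String Order

tenPercentOff :: OrderMod
tenPercentOff o = Right o { total = total o * 0.9 }

freeShipping :: OrderMod
freeShipping o
  | shipping o == "ground" = Right o { total = total o - 5.0 }
  | otherwise              = Left "free shipping needs ground shipping"

-- Apply all modifiers in order; stop at the first error.
finalize :: [OrderMod] -> Order -> Either String Order
finalize mods o = foldM (flip ($)) o mods
```

Adding a new coupon type is then just consing another OrderMod onto the list.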

I'm obviously using some 20/20 hindsight, but I think that frequently the initial headaches of refactoring by type turn into insights to how the problem is forcing you to use the right tools.


Do I need to save their return values to avoid calling them too many times? Am I reusing a stale value where I should have called it again, and how should I know that without calling too many times? Do these hidden states need to be visible across multiple cores, and if so how do I do that without imposing a memory fence and a pipeline stall on every call? How does every unit test rewind all these hidden states back to their initial values, so they don't pollute each others' results? This change should be large and even painful, because you're inherently making the program harder to reason about and probably breaking a lot of code without knowing it.


Yikes! This might be an ill-defined problem in some cases -- optimization based on referential transparency in the original functions means that you may not really know how many times these functions were called. Either you have that problem, or the functions were already stateful.


If you've designed something well, then all changes can be easily accommodated.

But to "design something well" requires omniscience, because it depends on understanding the problem, and in what ways that problem will likely change in future. It's a question of fact about the world, not an intellectual, mathematical or computational truth. When you are very familiar with a problem, you acquire this domain knowledge, much of it informally and unconsciously, and then you can design well... or, well enough for practical purposes.

Along the way, in the process of acquiring this domain knowledge, you will make mistakes, and you will need to change things. Fred Brooks: Build one to throw away. You will anyway. Oh, and then you get the second system effect, when you try to correct all the mistakes of the first disaster.

Unfortunately (or fortunately, if you like learning), most of us move on to new projects and new domains so quickly, that we never acquire that level of mastery of a problem domain. How many people have written the same kind of application three times from scratch?


> If you've designed something well, then all changes can be easily accommodated.

I disagree with that. If you can easily accommodate any changes it means you probably over-designed and over-generalized and made your solution too complicated.

> But to "design something well" requires omniscience, because it depends on understanding the problem, and in what ways that problem will likely change in future.

Yes, that is the problem. A lot of good programmers are prone to over-generalizing, even for code that should be simple and straightforward. Sometimes it pays off, but not always.

> Fred Brooks: Build one to throw away. You will anyway.

I tend to approach programming as experimentation. "Let's build something and try it out" kind of approach. Then "if it works, we'll enhance it, otherwise we'll throw it away and try something else". It seems wasteful but with a language like Python it is easy to prototype things out.


Being forced to re-design code written with the wrong assumptions doesn't qualify as proof that (purely) functional programming "doesn't work".


Almost all code is initially written with the wrong assumptions.


And it should(+) always be refactored to match the new assumptions in a meaningful way.

I get your point, I just meant that "doesn't work" is quite a stretch from "doesn't allow ugly global variable hacks".

(+) http://xml.resource.org/public/rfc/html/rfc2119.html#anchor3


If there is information that needs to be accessible across several not-otherwise-very-interdependent modules of a program, then you need something with essentially all the properties of a global variable. That doesn't necessarily mean a single identifier with global scope, but it does mean something with essentially the same properties.


Since when are global variables ugly hacks?


For me this example argues for functional programming, not against it - I mean, since when is the ease of using global variables an advantage?

You end up with code that is hard to reason about.

For me it's a good thing that making code complicated to understand is hard.


My initial reaction was also "why would you". Take the "two most isolated functions in the entire codebase" and make them dependent on each other? OK, assuming there is a good reason (not given in the example) to do this, I'd say you achieve this by passing whatever bit of state data you need to drive the behavior in the various calls to functions A and B. No need for global variables to achieve this, nor is it really any easier or cleaner to use them.


>I'd say you achieve this by passing whatever bit of state data you need to drive the behavior in the various calls to functions A and B.

Obviously you can do that. The author mentions this possibility. But his point is that passing the state around may involve adding extra parameters to a very large number of functions, which is much more difficult than just using a global variable.

It seems odd to me to say that faking global variables by clumsy manual state threading is cleaner than just using global variables. It is more error-prone, more verbose, and less indicative of program structure and programmer intent.


Using global variables is far easier in this case and may entail changing a whopping three lines of code: one for the declaration of the variable and one each in the two functions using the new variable. The argument-passing variant, on the other hand, requires changing almost every single function call in the whole program. In this sense, imperative programming is strictly more expressive than purely functional programming (in the sense of the word that IIRC goes back to Felleisen: a paradigm is more expressive if a construct that is localized in it requires a global program transform in another paradigm).

Of course this doesn't mean that a program will be easier to read after taking the easier-to-write route too many times...


Well, couldn't you write new A and B functions that invoke the original functions with state parameters added, while also providing the state-handling mechanisms? Then all other existing calls remain unchanged. This really just introduces a namespace for what would otherwise be a "global", but it keeps the real "global" namespace unpolluted.


Err, his example is misleading - that's the problem with generalizing and ignoring nuances. The problem here is purely syntactic. Any functional language worth its salt supports lexical scoping, which means symbols in different scopes can shadow each other. So, I can easily say something like this:

  if a > 0
  then let a = a + 1
       in -- do smtg with a
  else -- do smtg with a
Of course the inner a within the let clause is a "different" a, because outside of the let clause the old a remains. This doesn't have to be slow either, because if the compiler detects that the outside a isn't used anymore, it can simply increment a memory location or a register.

With a little bit of syntactic support from the language, you could easily write this in a natural, comfortable form, which will then automatically get compiled to the code above. In Haskell, it's pretty easy to do that with monads:

  when (a > 0) $
    modify (+ 1)
(I haven't touched Haskell in a while so there might be some minor syntactic issue here, but the overall point still stands). The point here is that the number of potential branches can be determined at compile time, which makes the problem syntactic.

A bigger problem is when you're dealing with iteration, and you can't know at compile time how long the loop is. Then you can't unroll it, which means the compiler implementors can't use the technique above. But they can use a different technique - recursion, which can later be compiled back to iteration via tail-call optimization, removing any awkwardness. Again, with do-notation, it's trivial in Haskell.
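To make the iteration point concrete: a loop whose trip count isn't known until runtime becomes a tail call, which the compiler can turn back into a plain jump. A tiny sketch:

```haskell
-- Imperative version: while (a < limit) { a = a * 2; }
-- Functional version: tail-recursive, no stack growth.
doubleUntil :: Int -> Int -> Int
doubleUntil limit a
  | a >= limit = a
  | otherwise  = doubleUntil limit (a * 2)  -- tail call
```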

Of course the question is, if you go through all the trouble to build up the beautiful, elegant mechanics that make it possible to write imperative code that gets compiled to purely functional code, that gets compiled back into imperative machine code for efficiency - what's the point? IMO the point is mathematical beauty - something you can't easily put a price tag on. There are some empirical benefits as well, but I'm not sure if they outweigh the troubles. Anyway, there is a great argument here, but the author only scratches the surface of the problem and doesn't dig nearly deep enough to get to the meat of the problem.


That example actually appears fairly regularly in Erlang code.

    A1 = ...
    A2 = ...
And it leaves a bad taste in one's mouth.

I see putting stuff in a variable as a way to 'take a breather' in the middle of code:

    A = some(hairy(function(that() + does() * lots()))),
    B = back(into(A) + the(A) * fray()).
I guess that's not the best example because A is used twice and you could just pop the 'A' calculation out into its own function, but sometimes it's nice to make code easier to read by breaking stuff into discrete steps that are easy to read later, rather than one huge line that does everything and then returns it.


The "A1 = ..." thing is a convention from Prolog, from which Erlang inherited its single-assignment variables (and many other relatively unusual features). People who write code with several levels of X1 = ..., X2 = ... vars are probably writing Erlang (or Prolog) with a thick imperative accent. It's like writing Python for loops using integer indexes - unless there's a specific reason to do so, it's a sign that the author is probably new to Python.

While putting intermediate steps of a calculation into variables can help clean up the code, if there's any sort of conceptual significance to that value, it's worth choosing a better name than X1. "ATC" (with "Avg. Triangle Count" as an end-of-line comment), for example, would actually mean something.


I think the cleaner way to describe that in functional terms is "giving a symbolic name to sub-expressions". There's nothing "stopping" or "starting" about the computation, you're just splitting out some part and labelling it. Obviously "A1" doesn't convey much, but something like "successFraction" does. I do this in imperative languages too.


The confusion between real limitations and pure syntax makes more sense when you realise he's using Erlang. Erlang's syntax is not exactly wonderful. But the reason you would use Erlang isn't that it's a great functional language; it's that it's a great distributed/reliable language.

The fact that the author is dropping Erlang over (largely) syntax issues, without apparently understanding what they would be losing that other languages - imperative or functional - simply don't provide, suggests the complaints shouldn't be given that much weight...


I don't think he's dropping Erlang, just deciding that allowing small amounts of imperative / impure code to appear when it greatly simplifies things is a worthwhile trade-off. (His last post, about using the Erlang process dictionary, is along the same lines.)

Also, his archives are worth a read, particularly the Purely Functional Retrogames series. Excellent stuff.


What I didn't get about the SSA example was why CPS wasn't considered. It's literally the functional language equivalent.

As for running a static variable across two unrelated functions, I don't know any language where that's considered a good idea. Well, maybe Fortran, but that really doesn't help the argument.


You could make the whole 'a > 0 => increment' thing even cleaner than the first example:

  let a = (if a > 0 then a + 1 else a)
      in -- something or other


This guy may be smart, and he may have done a lot, but this article seriously smells like a troll. Isn't this exactly the sort of behavior you're not supposed to engage in?

Two library calls, deep in vastly different parts of the code. And you're going to make them mess with each other? For christ's sake, they're used everywhere, they've been working fine for ages, and now you're going to mess with them, make them intertwined and thus complicated, for one stupid edge case.

You know if you really wanted to do it, you could do an unsafePerformIO, but...

...I'm just not buying it. I can think of many ways that FP (not just pure FP) is deficient, but this is not one of them. The biggest problem I see with FP is that when your data model is sparse, imperative-style state change updates are just more efficient from a programming perspective.

I should note that I'm not the first one to spot this. It was talked about amongst the FPers I knew in school. Let's just say each variable, every function input, and every function output is a vertex in a graph. When I say that the data model is sparse, what I mean is that the edge count on the vertices is small. How small? I dunno, it's a gut feeling sort of thing, or maybe someone could correct me on it.
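On the unsafePerformIO point above: the usual (and widely discouraged) shape of that hack is a top-level IORef, something like this sketch (names made up):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A "global variable" smuggled in at the top level. The NOINLINE
-- pragma matters: it keeps GHC from duplicating the IORef.
{-# NOINLINE bCallCount #-}
bCallCount :: IORef Int
bCallCount = unsafePerformIO (newIORef 0)

funB :: Int -> IO Int
funB x = do
  modifyIORef' bCallCount (+ 1)   -- record the call
  n <- readIORef bCallCount
  return (x + n)                  -- behaviour depends on the count
```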


Now make these dependent on each other: function A behaves differently depending on the number of times function B has been called and vice-versa...In C, this is easy!...It’s easy to mechanically convert a series of destructive updates into what’s essentially pure code.

These are both easy tasks in Haskell. Just use the state monad. You also get 100% certainty that the state isn't leaking into places where you didn't want it to leak to.

Lack of controlled mutable state is a failure of Erlang, not functional programming.

Also, counting function calls would actually be fairly straightforward in Erlang as well. You create a process which counts function calls to B. A then queries this process when it needs to find out how many function calls were made to B. Slightly harder to program, but it works even in the distributed setting.
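That counter-process idea translates pretty directly; in Haskell you could fake the process with an MVar (a sketch, names made up):

```haskell
import Control.Concurrent.MVar

-- The "counter process": shared state that B updates and A queries.
newCounter :: IO (MVar Int)
newCounter = newMVar 0

funB :: MVar Int -> Int -> IO Int
funB counter x = do
  modifyMVar_ counter (return . (+ 1))  -- record the call to B
  return (x * 2)

funA :: MVar Int -> Int -> IO Int
funA counter x = do
  bCalls <- readMVar counter            -- ask how often B has run
  return (x + bCalls)
```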


That's not quite fair though: the problem as he states it is to do this after you've written the program. So while yes, you could use the State monad, that might not be a trivial change to the code. Things might be even worse if the function was already written in another monad; monad transformers can be just a little tricky. Unless of course you write everything in the State monad just in case you ever have to do anything like that :)


This is really a misleading title.

Should have been something more like:

  "Purely Functional Programming doesn't always work"


Everything pure in programming will tend to have similar problems I guess. I like languages where imperative, object oriented and functional constructs can be mixed together, and I think Ruby is one of the best languages from this point of view.


Upvoted for pragmatism. I like Python for this as well.


upvoted because that comment made too much sense to sit at 0.


function A behaves differently depending on the number of times function B has been called and vice-versa.

In C, this is easy! It can be done quickly and cleanly by adding some global variables. In purely functional code, this is somewhere between a major rearchitecting of the data flow and hopeless.

In Common Lisp, I'd DEFVAR a variable with dynamic scope, then LET its value be incremented on each call to B. I'd never use SETQ or its variants. There's no major re-architecting here, and no breakage of the functional style. Dynamically-bound variables are like implicitly passed-through arguments to every function call.


Um, in purely functional code isn't this by definition not possible? As in, the definition of "functional" means that it has no side effects and the return value depends only on the input values?


With dynamic variable binding, there are no side effects, because the LET is undone as soon as its scope is exited.

Think of dynamic variables as constituting merely the implicit passing of a symbol->value hash table as an argument to every function call, and a LET as a local modification to this hash table.

So:

  (defvar a 10)

  (defun x () (if (= a 0) t (let ((a (- a 1))) (x))))

behaves the same as:

  (defun x (&optional (a 10)) (if (= a 0) t (x (- a 1))))


That said, this only works if all the calls to B are still on the stack. I'm not sure what the problem being solved is so I might have misunderstood the intention here.


The second example, single assignment form, is useful for walking through the function based on time/computation-sequence. Very good for making assertions in the middle of the code somewhere.

edit: I forgot to mention that Dijkstra had some complaints when Functional Programming first reared its head, http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EW...
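The point about assertions is easy to see in a Python sketch (an example of mine, not from the thread): with single assignment, each intermediate name is bound once, so an assertion in the middle of the function refers to an unambiguous value rather than a variable whose meaning changes over time.

```python
def normalize(total, count):
    # Single-assignment style: raw_mean and clamped are each bound
    # exactly once, so mid-function assertions are unambiguous.
    raw_mean = total / count
    assert raw_mean >= 0, "totals are expected to be non-negative"
    clamped = min(raw_mean, 100.0)
    assert clamped <= 100.0
    return clamped

print(normalize(250, 2))  # prints: 100.0
```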


I can say: a' = a + 1

What's so confusing about that? This is not a shortcoming of FP.


Yeah, it was a weird thing for him to mention.


Take a look at Mozart/Oz, a multi-paradigm/multi-model programming language. It is a rich language supporting OOP, dataflow, functional, and declarative styles, and it can hide observable non-determinism. Though it might not be fast enough to handle your particular case of gaming. Grab "Concepts, Techniques, and Models of Computer Programming" for a read.


This is why I like Clojure's pragmatic approach. FP doesn't solve every problem easily, but often your core challenge is handling state transitions. Clojure attacks mutable state better than most languages out there, lets you get functional when you want to, but still allows for reasonably easy imperative code for things like Java interop.


The problem the author finds with pure functional programming is very well known and is one reason people never use a pure functional language, sticking instead to practical functional languages such as Erlang (that has processes), ML dialects (that have references), Haskell (that has IORefs and unsafePerformIO), Scheme and Common Lisp (that have full support for imperative programming), Clojure (that has Vars, Refs, Agents and Atoms), etc.

I think the culture surrounding each of those languages does look down on using the mutation features (and for good reasons), but it's not like the message is "never use them", but rather "don't use them if you don't have to".

Maybe the fact that he doesn't emphasize that his gripe is with pure functional programming and calls it plain "functional programming" is a little misleading.


For the reading-disinclined: it's not a failure because of soft issues such as marketing, but because the further you go down the purely functional road, the more mental overhead is involved in writing complex programs.


Somehow, the links in the article do not work for me (they all link back to the same article, and searching for those links with various techniques does not turn up anything useful).

But overall, I guess some FunctionalWeenie learnt about NoSilverBullet, to put it in C2-wiki terms (so don't take the mild burn too seriously :) ).


No. This guy is the real thing. He has been doing hard-core game programming since the early 80s and serious Erlang hacking since 1999 (!), and has a long record of thinking deeply about programming and writing about it. This is one of the few programming bloggers worth taking seriously.

Oh, and the links work fine in the original post (http://prog21.dadgum.com/54.html).


Yes, I am reading the original posts now and he is right: the preceding posts are ones I remember reading, and they do change the light in which the article should be read.


Ok, I can't extend this into a ridiculously long series of blog posts about "what I've learned" etc. but I can sum up this and every other remotely similar article about how such and such technology or method is insufficient for every task:

"No one thing is good for everything."


I'm a bit of an Erlang noob, but my projects have started to reach the complexity where keeping track of the flow of information gets convoluted. However, the advantages that it (FP/Erlang) offers in providing side-effect-free, scalable code are too great to ignore. I understand that in his case he needs very fast access to data that 'should' be globally accessible, in which case a global variable makes sense. So far, it has been a bit annoying not having 'global' variables in Erlang (not even in modules).

However, good languages like Erlang offer many alternatives. Mnesia has near real-time access speeds, which you could, for example, use from your render thread/process. Erlang also has things like per-process dictionaries, and ETS for dirty, even faster access to 'global' data.

I suspect that a lot of it has to do with the architecture of the code. I tend to prototype fast, so I run into these issues often. But I keep mental notes on how the code would be better broken into modules, how to reduce the number of passed arguments (and make those the fastest possible primitives), and how to move any 'global' data to Mnesia.



