I've certainly written code that has found reuse, but that tends to be the exception rather than the norm. A lot of stuff feels like special snowflake code: one-off implementations that exist to map data from one object to another, or perform some calculation on a data set that will never live outside of the application it was built for. In spite of all of this, I value the concept of reuse. If that's so, why does it feel like it happens so infrequently?
Code reuse is thwarted by coupling. When you have a Foo that has to convert to a Bar, you're generally left with a piece of functionality that only works when your other systems also use a Foo AND a Bar. Chances are, your system will be lucky to reuse one, let alone both. Foo was built to satisfy some need of your original problem, not the one you're working on today. If we could just operate on strings or ints all day, reuse would be easy, right? It's easy to write functionality that does some calculation with numbers or performs some manipulation with strings and use it again and again all over the place. How many of us really work that way, though? We're told that we need to use Cats and Dogs, not ints and strings. Our frameworks and toolsets have enormous effort invested in keeping us thinking about Cats and Dogs. Ints and strings aren't useful on their own precisely because an int can represent any kind of number. It doesn't readily indicate a Cat's age or the number of a Dog's teeth. Not alone, anyway.
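The coupling problem above can be sketched in a few lines of Python. The Foo and Bar types here are hypothetical stand-ins, just as in the text; the point is that the converter is dead weight anywhere both types don't already exist.

```python
# A sketch of the coupling problem: a converter that only has value
# in a codebase that uses BOTH of these (hypothetical) types.

class Foo:
    def __init__(self, name, count):
        self.name = name
        self.count = count

class Bar:
    def __init__(self, label, total):
        self.label = label
        self.total = total

def foo_to_bar(foo):
    # Reusable only where Foo and Bar both live. A new project with
    # its own types gets nothing from this function.
    return Bar(label=foo.name, total=foo.count)
```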
Perhaps we can write small stuff that yanks out cat ages and dog tooth counts. Then we're working in those basic types that are easy to manipulate, right? I think there's some value here. We've decoupled some data from the objects that store the data. Wait - we were never told that simply having an object with data in it is a form of coupling. I thought coupling was just when we have a scare() method on a Dog that accepts a Cat as a parameter and then sets the Cat's scared property. What we really want is a ScareAdapter, right?
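That "yank out the basic types" idea might look something like this (the Cat and Dog shapes are assumptions for illustration):

```python
# Hypothetical sketch: small extractor functions that pull plain ints
# out of domain objects, so downstream code can work in basic types.

class Cat:
    def __init__(self, age):
        self.age = age

class Dog:
    def __init__(self, teeth):
        self.teeth = teeth

def cat_age(cat):
    # Once extracted, the age is just an int; any numeric code can use it.
    return cat.age

def dog_tooth_count(dog):
    # Likewise just an int, no Dog required past this point.
    return dog.teeth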
Well, if objects can be coupled to the data they hold, how does that happen? One thing I've noticed is that once I've made a Dog able to scare a Cat, even with the adapter we're still talking about Cats and Dogs. What if I later need a Dog to scare a Dog, or even a Mouse to scare an Elephant? Is there something intrinsically unique about the way in which a Dog must scare a Cat? Do the properties of scaring and being scared have to be strictly defined for each animal? If widespread reuse is a thing we really can get to, the answer must be no.
The thing I'm starting to realize is that objects, which we should really think of as data with behavior attached, couple themselves to the data they hold via that behavior. What is scaring, really? You take an animal and set its scared property to true. Now it doesn't matter who is being scared. Dogs, Cats, Mice, Elephants, etc. are all able to be scared, because all of those things are just buckets of data on which your scare() functionality can work its magic.
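A minimal sketch of that decoupled scare(), assuming the only contract is "has a scared attribute":

```python
# One scare() for everything, instead of Dog.scare(Cat), Dog.scare(Dog),
# Mouse.scare(Elephant), and so on. Any bucket of data with a 'scared'
# slot will do.

def scare(animal):
    animal.scared = True
    return animal

class Cat:
    def __init__(self):
        self.scared = False

class Elephant:
    def __init__(self):
        self.scared = False

cat = scare(Cat())
elephant = scare(Elephant())
# Both are scared now; no Dog/Cat pairing was ever required.
```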
It turns out this is the tree that functional programmers have been barking up for a long time. I've long thought there are things I like about FP, but ultimately I didn't see how you write practical applications with it. If you can't have mutable state, then how does the machine do anything besides get warm when your software runs on it? You need to write to buffers and streams and databases in order to accomplish some real work. At some point, the tires must hit the pavement, right? This isn't helped by the fact that, historically, the examples for FP have been super abstract math stuff. Ok, great, you can write code that does operations on lists of numbers. I don't have an application that speaks that domain language. My applications deal with shopping carts, shipments, rentals, mailbox messages, game entities, etc. Number manipulation isn't so hard, and so showing me trivial examples doesn't really impress upon me how an application structures itself, or how I go hunting down a problem when somehow a shipment didn't get all of the data it needed. I can manipulate numbers easily with my current code using my current tools. So where's the benefit?
I've seen a lot of smart people gravitate towards FP or even make a living using it, so I was hesitant to totally dismiss it, but I did need some kind of ambassador from the land of FP to lay things down for me in terms of the language I speak. Well, if you've ever had to tutor for software engineering before, you'll find it's damn hard. You basically have to teach another person how to think. How the hell do you even start with that? Is there a book "How to Think for Dummies"? "Learn to Think in 24 Days"? Let me know when you find it. I don't blame the FP folks for having difficulty teaching their stuff. I've heard it explained that all applications have to have some concept of math, or at least we all know how to work with ints, and ints aren't really specific to a domain, be it driver development, games, web stacks, etc. As I learn more about FP, I think working in basic stuff like numbers is where the simplicity of FP comes from. It's not a matter of saying "everyone has to do some kind of math in their software at some point, so we'll all just collectively teach at this level". It's because that's really the place where they spend a lot of time. You write some functionality that knows how to operate on ages or counts, and then you apply it to cats and use it to tally teeth on dogs. This form of decomposition is fundamental to how FP works, and I think FPers at times can take that for granted, just as we take object dereferencing for granted.
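The decomposition described above, generic numeric code paired with small domain-specific extractors, can be sketched like this (the cat/dog data is invented for the example):

```python
# Generic numeric functionality, written once, with no domain types in sight.
def average(xs):
    return sum(xs) / len(xs)

# Domain data: plain buckets of data, here just dicts.
cats = [{"name": "Mia", "age": 3}, {"name": "Tom", "age": 5}]
dogs = [{"name": "Rex", "teeth": 42}, {"name": "Fido", "teeth": 40}]

# The same numeric code is reused across domains by pairing it with
# tiny extractors: apply it to cat ages here, tally dog teeth there.
avg_cat_age = average([c["age"] for c in cats])    # 4.0
total_dog_teeth = sum(d["teeth"] for d in dogs)    # 82
```

Nothing in `average` knows about cats; that's exactly what makes it reusable.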
There's a lot more to FP than just figuring out how to work your way down to basic types and then going nuts, but I think accepting that this is a core way in which problems are approached is a huge factor in successfully learning it. With that in mind, maybe we can find some learning material that helps emphasize these approaches, even if it's a high-level document with pretty boxes and lines.
If you've done functional programming and came from imperative land, what helped it click for you?