How to view higher-order functions and IO actions from a mathematical perspective?
I am trying to understand functional programming from first principles, yet I am stuck on the interface between the pure functional world and the impure real world that has state and side effects. From a mathematical perspective,
- what is a function that returns a function?
- what is a function that returns an IO action (like Haskell's IO type)?
To elaborate: in my understanding, a pure function is a map from a domain to a co-domain. Ultimately, it is a map from some values in computer memory to some other values in memory. In a functional language, functions are defined declaratively; i.e., they describe the mapping but not the actual computation to be performed on a specific input value; deriving the latter is up to the compiler. In a simplified setting with memory to spare, there would be no computation at runtime; instead, the compiler could create a lookup table for each function already at compile time, and executing a pure program would amount to table lookup. Composing functions thus amounts to building higher-dimensional lookup tables. Of course, the entire point of having computers is to devise ways to specify functions without the need for point-wise table lookup - but I find the mental model helpful for distinguishing between pure functions and effects. However, I have difficulty adapting this mental model to higher-order functions:
- For a function that takes another function as argument, what is the resulting first-order function that maps values to values? Is there a mathematical description for it? (I'm sure there is, but I am neither a mathematician nor a computer scientist.)
- How about a function that returns a function? How can I mentally "flatten" this construct to again get a first-order function that maps values to values?
Now to the nasty real world. Interaction with it is not pure, yet without it, there are no sensible programs. In my simplified mental model above, separating pure and impure parts of a program means that the basis of each functional program is a layer of imperative statements that get data from the real world, apply a pure function to it (do table lookup), and then write the result back to the real world (to disk, to the screen, the network, etc.).
In Haskell, this imperative interaction with the real world is abstracted as IO actions, which the compiler sequences according to their data dependencies. However, we do not write a program directly as a sequence of imperative IO actions. Instead, there are expressions of type IO a and functions that return such IO actions. But to my understanding, these cannot be real functions. What are they? How best to think about them in terms of the mental model outlined above?
Solution 1:[1]
IO is a data structure. E.g. here's a very simple model of IO:
data IO a = Return a | GetLine (String -> IO a) | PutStr String (IO a)
Real IO can be seen as being this but with more constructors (I prefer to think of all the IO "primitives" in base as such constructors). The main value of a Haskell program is just a value of this data structure. The runtime (which is "external" to Haskell) evaluates main to the first IO constructor, then "executes" it somehow, passes any values returned back as arguments to the contained function, and then executes the resulting IO action recursively, stopping at the Return (). That's it. IO doesn't have any strange interactions with functions, and it's not actually "impure", because nothing in Haskell is impure (unless it's unsafe). There is just an entity outside of your program that interprets it as something effectful.
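To make that "external runtime" idea concrete, here is a sketch of two such interpreters (my own illustration; the type is renamed IOAction so it does not clash with the real Prelude IO, and run/simulate are hypothetical names, not anything from base):

```haskell
-- The toy IO model, renamed so it compiles alongside Prelude's IO.
data IOAction a
  = Return a
  | GetLine (String -> IOAction a)
  | PutStr String (IOAction a)

-- One possible "runtime": interpret the structure using real effects.
run :: IOAction a -> IO a
run (Return x)   = return x
run (GetLine k)  = getLine >>= run . k
run (PutStr s k) = putStr s >> run k

-- An equally valid, pure "runtime": feed it canned input lines and
-- collect all output. This shows the model itself is ordinary data.
simulate :: [String] -> IOAction a -> (a, String)
simulate _  (Return x)      = (x, "")
simulate ls (PutStr s k)    = let (x, out) = simulate ls k
                              in (x, s ++ out)
simulate (l:ls) (GetLine k) = simulate ls (k l)
simulate []     (GetLine k) = simulate [] (k "")  -- out of input: supply ""
```

The pure simulate underlines the point: nothing about the data structure is effectful; only an interpreter outside the program makes it so.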
Thinking of functions as tables of inputs and outputs is perfectly fine. In mathematics, this is called the graph of the function, and e.g. in set theory it's often taken as the definition of a function in the first place. Functions that return IO actions fit just fine into this model. They just return values of the data structure IO; nothing strange about it. E.g. putStrLn might be defined like so (I don't think it actually is, but...):
putStrLn s = PutStr (s ++ "\n") (Return ())
and readLn could be
-- this is actually read <$> getLine; real readLn throws exceptions instead of returning bottoms
readLn = GetLine (\s -> Return (read s))
both of which have perfectly sensible interpretations when thinking of functions as graphs.
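As a further illustration (my own; echo and describe are hypothetical names), an action built from these constructors is plain data that ordinary pure functions can inspect, before anything is ever executed:

```haskell
-- The toy IO model from above, renamed IOAction to avoid clashing
-- with Prelude's real IO type.
data IOAction a
  = Return a
  | GetLine (String -> IOAction a)
  | PutStr String (IOAction a)

-- A hypothetical echo: read a line, print it back. This is just a
-- value of the data structure; no effect happens when it is built.
echo :: IOAction ()
echo = GetLine (\s -> PutStr (s ++ "\n") (Return ()))

-- Because actions are plain data, a pure function can inspect them,
-- e.g. name the outermost "instruction":
describe :: IOAction a -> String
describe (Return _)   = "Return"
describe (GetLine _)  = "GetLine"
describe (PutStr s _) = "PutStr " ++ show s
```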
Your other question, about how to interpret higher-order functions, isn't going to get you very far. Functions are values, period. Modeling them as graphs is a good way to think about them, and in that case higher order functions look like graphs which contain graphs in their input or output columns. There's no "simplifying view" that turns a function taking a function or returning a function into a function that takes just values and returns just values. Such a process is not well-defined and is unnecessary.
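To picture "graphs which contain graphs" concretely, finite functions can be modeled explicitly as association lists (a sketch; Graph, apply, and selfCompose are illustrative names, and real Haskell functions are of course not stored this way):

```haskell
-- A finite function, represented explicitly by its graph.
type Graph a b = [(a, b)]

-- Applying a graph = looking the argument up in the table.
apply :: Eq a => Graph a b -> a -> b
apply g x = head [y | (x', y) <- g, x' == x]

-- Graphs of two unary Boolean functions.
idG, notG :: Graph Bool Bool
idG  = [(True, True),  (False, False)]
notG = [(True, False), (False, True)]

-- A higher-order function as a graph: both its input column and its
-- output column contain graphs. Here, "compose a function with
-- itself", tabulated for the two functions above
-- (not . not = id, and id . id = id).
selfCompose :: Graph (Graph Bool Bool) (Graph Bool Bool)
selfCompose =
  [ (g, [ (x, apply g (apply g x)) | x <- [True, False] ])
  | g <- [idG, notG] ]
```

For example, apply selfCompose notG yields the graph of id - a table whose entries are themselves tables, exactly as described above.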
(Note: some people might try to tell you that IO can be seen as a function taking the "real world" as input and outputting a new version of the world. That's really not a good way to think about it, in part because it conflates evaluation and execution. It's a hack that makes implementing Haskell simpler, but it makes using and thinking about the language a bit of a mess. This data structure model is IMO easier to deal with.)
Solution 2:[2]
What is a function that returns a function?
You were almost there:
Composing functions thus amounts to building higher-dimensional lookup tables.
Here's a small example, in Haskell:
infixr 2 ||
(||) :: Bool -> (Bool -> Bool)
True || True = True
True || False = True
False || True = True
False || False = False
Your lookup table would then take the form of a case-expression:
x || y = case (x, y) of
  (True,  True)  -> True
  (True,  False) -> True
  (False, True)  -> True
  (False, False) -> False
The same thing without tuples, dispatching on one argument at a time:
x || y = case x of
  True  -> (case y of True  -> True
                      False -> True)
  False -> (case y of True  -> True
                      False -> False)
If we now move the parameter y into new local functions:
(||) x = case x of
  True  -> let f y = case y of True  -> True
                               False -> True
           in f
  False -> let g y = case y of True  -> True
                               False -> False
           in g
then the corresponding map-of-maps would be:
+-------+-----------------------+
| x | (||) x |
+-------+-----------------------+
| True | |
| | +-------+-------+ |
| | | y | f y | |
| | +-------+-------+ |
| | | True | True | |
| | +-------+-------+ |
| | | False | True | |
| | +-------+-------+ |
| | |
+-------+-----------------------+
| False | |
| | +-------+-------+ |
| | | y | g y | |
| | +-------+-------+ |
| | | True | True | |
| | +-------+-------+ |
| | | False | False | |
| | +-------+-------+ |
| | |
+-------+-----------------------+
So your abstract model can be extended to higher-order functions - they're just maps from some domain to a co-domain consisting of other maps.
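That map-of-maps can even be written down literally, e.g. using Data.Map from the containers package (an illustration only; orTable and orLookup are hypothetical names, and GHC does not actually compile (||) to a lookup table):

```haskell
import qualified Data.Map as Map

-- (||) tabulated as a map whose values are themselves maps,
-- mirroring the nested table above.
orTable :: Map.Map Bool (Map.Map Bool Bool)
orTable = Map.fromList
  [ (True,  Map.fromList [(True, True), (False, True)])
  , (False, Map.fromList [(True, True), (False, False)])
  ]

-- Applying the curried function is then two successive lookups:
-- the first lookup returns an inner table, the second a Bool.
orLookup :: Bool -> Bool -> Bool
orLookup x y = (orTable Map.! x) Map.! y
```

Partially applying the function - `orTable Map.! True` - simply selects one of the inner tables, which is exactly what currying means in this model.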
What is a function that returns an I/O action (like Haskell's IO type)?
Let's answer the simpler question:
What is an I/O action (like Haskell's IO type) [from a mathematical perspective]?
...from a mathematical perspective? That's an ironic question, considering mathematics itself is abstract:
In a preliminary sense, mathematics is abstract because it is studied using highly general and formal resources.
The Applicability of Mathematics (The Internet Encyclopedia of Philosophy).
...that includes abstracting from the existence of an external environment filled with I/O devices and their reliance on effects. This leaves languages like Haskell, which strive to be as closely based on mathematics as possible, with a dilemma:
- How must interactions between a program and an external environment (consisting of, e.g., input/output devices, file systems, ...) be described in a programming language that abstracts from the existence of an outside world?
- One useful property of expressions is *referential transparency*. In essence this means that if we wish to find the value of an expression which contains a sub-expression, the only thing we need to know about the sub-expression is its value. Any other features of the sub-expression, such as its internal structure, the number and nature of its components, the order in which they are evaluated or the colour of the ink in which they are written, are irrelevant to the value of the main expression.
(emphasis by me.)
So right now (February 2022), there is no practical way to view I/O itself from a mathematical perspective, because mathematics simply doesn't provide one.
I/O: it's the missing Millennium problem...
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | HTNW |
| Solution 2 | |
