*This topic of mathematical modelling is something I looked at for a chapter in my thesis, and it is a tool I use in my job. I think it's good to reflect on the capabilities and limitations of the tools we use, so that's what this post attempts to do.*

I'll start with a quote from my thesis (section 6.2):

> Modelling is a diverse endeavour, in which different approaches are appropriate depending on the circumstances and purpose to which a model will be applied. Models can be distinguished from each other by a broad range of criteria, including their scale, degree of abstraction and approximation, whether they are intended to be mechanistic or merely descriptive, and whether they include dynamic and probabilistic considerations. Peierls (1980) classified models in physics according to their degree of simplification. The author argued that all types of models can be useful so long as their limitations are recognized and their use is restricted to appropriate circumstances, whether calculations, teaching, or thought experiments. The models presented in this chapter mostly fit in the middle of the categorization scheme of Peierls (1980) as "Simplifications" or "Approximations", where some features of the phenomenon being studied (nitrification in this case) are omitted to provide clarity or considered negligible enough to ignore.
>
> Murthy et al. (1990) provided additional means for classifying models. They divided types of models based on whether they include changes with time and whether they are deterministic or include randomness. Most of the models evaluated below are static and deterministic, although the model of Yang et al. (2008) involves changes with time. Murthy et al. (1990) suggest that models may be further categorized based on the number of independent variables that they use and whether those variables are mathematically discrete or continuous. Dym (2004) emphasized the importance of using a proper level of detail and physical scale when selecting or designing a model.

In the course of my research, I considered different models for nitrification in drinking water distribution systems. That involved thinking about their different features and characteristics, and their levels of complexity. In this post, I want to take a more general look at how such features apply to mathematical models in engineering, regardless of the phenomenon they describe. From the paragraph above, and from other notes I took on the linked references (and re-read recently while preparing this post), here are some characteristics of models to keep in mind when using them:

- Is the model static or dynamic? That is, does it change with time?
- What is the relevant scale (in space and time)? For example, Brownian motion can often be ignored at a macroscopic scale; and many chemical reactions happen almost instantaneously compared to biological processes.
- Is the model deterministic or stochastic? That is, do the same inputs always produce the same outputs, or is some randomness included?
- Is the model mechanistic? That is, is it grounded in an understanding of the actual underlying processes, or does it simply try to empirically relate inputs and outputs (often referred to as a "black box" model)?
- How many input and output variables are involved? The number of variables used is a way to quantify the complexity of the model.
- Are the variables continuous or discrete mathematically?
- How many coefficients or fitting parameters are there? Too many degrees of freedom can make overfitting (discussed below) more likely.
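To make the first few distinctions concrete, here is a minimal sketch of a first-order decay model in three variants. The mechanism, rate constant, and noise level are all invented for illustration; nothing here comes from the references above.

```python
import math
import random

# Hypothetical first-order decay model (numbers invented for illustration):
# dC/dt = -k * C, with rate constant K and initial concentration C0.
K = 0.5    # rate constant, 1/h
C0 = 10.0  # initial concentration, mg/L

def dynamic_deterministic(t_hours: float) -> float:
    """Dynamic (the output changes with time) and deterministic
    (the same input always gives the same output)."""
    return C0 * math.exp(-K * t_hours)

def dynamic_stochastic(t_hours: float, rng: random.Random) -> float:
    """The same mechanism, plus random measurement-like noise, so
    repeated runs (with different seeds) give different outputs."""
    return dynamic_deterministic(t_hours) + rng.gauss(0.0, 0.1)

def static_deterministic(input_rate: float) -> float:
    """Static: a steady-state mass balance with no time dependence.
    At steady state, input rate = K * C, so C = input_rate / K."""
    return input_rate / K
```

The mechanistic/black-box distinction shows up here too: all three variants are mechanistic, since they follow from a mass balance rather than being fitted blindly to data.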

The Peierls (1980) paper offers a lot of insight into the appropriate level of simplification or abstraction for a model, which varies depending on the application: even absurd thought experiments and analogies can aid understanding. The following excerpt is from the paper's conclusion:

> What is common to all these different types of models is that they serve as aids in thinking more clearly about physical problems, by creating simpler situations, more accessible to our intuition, as steps toward a rational understanding of the actual situation.
>
> In choosing the term "model-making" in the title, I feared that this might induce thoughts of model railways or model ships. However, on reflection I decided this would not be inappropriate, inasmuch as these also serve to reduce railways or ships to manageable proportions.

The book by Dym (2004) describes a model as a representation, in mathematical terms, of the behaviour of real devices and objects. The stated purpose of modelling is to describe or explain observations, or to predict future observations, events, or outcomes. The author advises matching the scale of the model with a suitable level of detail, and not applying a model outside the scale for which it is valid.

There is a lot of overlap between statistical forecasting and modelling in engineering, so I referred back to *The Signal and the Noise* in writing this post. Nate Silver calls overfitting "the most important scientific problem you've never heard of" and defines it as "an *overly specific* solution to a *general* problem" and "mistaking noise for a signal" (see p.163). I also like the following passage from that book:

> As the statistician George E. P. Box wrote, "All models are wrong, but some models are useful." What he meant by this is that all models are simplifications of the universe, as they must necessarily be. As another mathematician said, "The best model of a cat is a cat." Everything else is leaving out some sort of detail. How pertinent that detail might be will depend on exactly what problem we're trying to solve and on how precise an answer we require. (p.230, emphasis and links added)

*The Signal and the Noise* also discusses (in the context of climate change models) how much complexity is suitable in a model. More complexity is justified when the underlying mechanisms are understood* (i.e. when it is a mechanistic model rather than a black box, as distinguished above). Even so, Scott Armstrong warns that "the more complex you make the model, the worse the forecast gets" (p.388), while Nate Silver notes that you still need *enough* complexity, which can be a judgement call.

*I'd offer the Activated Sludge Model family, which laid the foundation for many wastewater treatment process models, as a good example of a somewhat complex model that is justified by its mechanistic nature.
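Silver's definition of overfitting — "mistaking noise for a signal" — is easy to demonstrate numerically. The following sketch uses made-up data (nothing here comes from the book): noisy samples of a straight line are fitted with both a simple polynomial and an overly flexible one.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up "truth": a straight line, observed with noise.
x_train = np.linspace(0.0, 1.0, 8)
y_train = 2.0 * x_train + 1.0 + rng.normal(0.0, 0.2, size=x_train.size)

# A simple model (2 coefficients) vs. an overly flexible one (8 coefficients,
# enough to pass exactly through all 8 training points).
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)
flexible = np.polynomial.Polynomial.fit(x_train, y_train, deg=7)

def mse(model, x, y):
    """Mean squared error of the model's predictions against y."""
    return float(np.mean((model(x) - y) ** 2))

# The flexible model "wins" on the training data by fitting the noise;
# its training error is essentially zero.
train_simple = mse(simple, x_train, y_train)
train_flexible = mse(flexible, x_train, y_train)

# But judged against the underlying signal at new points, the flexible
# model's oscillations typically make it the worse forecaster.
x_new = np.linspace(0.05, 0.95, 50)
y_true = 2.0 * x_new + 1.0
new_simple = mse(simple, x_new, y_true)
new_flexible = mse(flexible, x_new, y_true)
```

The extra degrees of freedom let the flexible model chase the noise exactly — the "overly specific solution to a general problem" — which is why limiting coefficients and validating on held-out data (as in the conclusion below) matter.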

In conclusion, getting a model's scale, level of simplification, and complexity right is essential and requires good judgement. A good model strikes a balance between accuracy and usefulness, and proper validation and avoiding unnecessary complexity can guard against overfitting.