· the fast track towards algorithmization ·

Fighting the hype by understanding the basics.

Have you ever thought of challenging the assumption that applied science must be different from science applied? After all, the scientific method is built only upon the scientific side of "applied science", and its assumptions are often far from reality. Both expert heuristics and the scientific method have their pros and cons when it comes to optimizing business impact.

Let's dive deeper into this challenging field through another of Sergio's posts.

Zoom-out, it all makes sense: vol. 2

Applied science is not science, and science applied is not applied science.

May 9th, 2022

There are a lot of misunderstandings when it comes to innovation through applied science. Let me try to be a bit polemical here, yet as eloquent as possible, so that we start understanding applied science slightly better by merely zooming out a little.

First, you need to note that Academia is an industry itself - the industry of papers. If you want to make it as a scientist, you have to publish your ideas. Fair enough. And for that, you need those ideas to be good, you need an influential academic network (yes, regrettably, here as well!), and you have to follow some artificial and natural rules.

Which artificial rules? Not surprisingly, those that allow you to publish a paper - this format in this journal, that maximum word count in that other one... The whole pack. I hate that. I understand it - because it is an industry. But I hate it - because it kills those innovations with large barriers to entry that need longer intros, or those that could be explained far more clearly with far less wording. Those artificial rules surely affect the overall quality of the papers and the natural spread of knowledge. But, to be honest, that's still not really the point I wanted to make here.

The point is easier to discuss. And it is embedded within the natural rules required to publish a paper.


Well, not surprisingly, you have to elegantly isolate the technique you want to prove - your value add as a scientist - so that your peers can analyse it.

And here's the thing: your value add, when you are a scientist, is rarely the most accurate solution to the problem you are tackling in a paper. The paper has to be far more generalist than a business problem - actually, the business problem is just the excuse, the realistic touch that makes it more understandable. Its core value add is typically an incremental innovation on a former model. Basically, you take as a seed a previous model that was grabbing the attention of the ecosystem, you tweak it here and there (those tweaks take years of training and tons of creativity, by the way), you get someone else (hopefully, a big name in the area) to sign the paper along with you and, there you go, you have a tier-one paper.

So, based on the previous point, there are two crucial aspects that change the whole business-academia interaction:

1. As noted above, this process leads to incremental innovation. Everything we've seen naturally erodes the odds of publishing disruptive innovation in well-known journals. Disruptive innovation goes against the standards of the industry - hats off to all those who manage to overcome them. And as such, beyond being a tiresome burden to scientists, it becomes a major issue for society.

2. A paper that solves for this or that business problem is not really solving it. It is decorating a model within that business context for the sake of clarity. This means that by following the paper you don't get a complete solution to the problem - just one more tool to fix it. And one of the best places to witness this disconnect is the Finance arena, where every year there is a new, elegant model that promises better, more orthogonal returns and finally yields very poor results - not out-of-sample but out-of-paper, in real life.

To me, after all these years, the beauty of the solution to a business problem lies in the combination of tools and models. In having an advanced M that allows you to orchestrate different Ls within your ML problem. In having the M so advanced that it further allows you to augment it with the experts' feedback - i.e. Augmented Machines - which are of utmost interest when there are paradigm shifts, when past data has little to say about future data, as in the current market conditions. But that requires a massive investment in designing the M and fine-tuning it ad hoc to the business problem. And a very long intro in a paper. And it wouldn't have a seed paper to isolate it elegantly... So it wouldn't fit in the academic industry. The disruptive ideas that do sneak in are the ones that generate new schools of research. The ones I value the most - whether I ultimately agree with them or not.
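To make the "advanced M orchestrating different Ls, augmented with expert feedback" idea a bit more concrete, here is a minimal, purely illustrative sketch. Everything in it is an assumption of mine, not the author's actual method: the function names, the weighted-combination rule, and the `regime_shift` knob that leans on the expert more heavily when past data says little about the future.

```python
def orchestrate(predictions, weights):
    """The 'M over Ls': a weighted combination of several learners' outputs."""
    assert len(predictions) == len(weights)
    total = sum(weights)
    return sum(p * w for p, w in zip(predictions, weights)) / total

def augment_with_expert(machine_view, expert_view, regime_shift=0.0):
    """Augmented Machine: blend the machine's view with an expert's view.

    regime_shift in [0, 1] captures how far current conditions diverge from
    the past; the stronger the paradigm shift, the more weight the expert gets.
    """
    return (1 - regime_shift) * machine_view + regime_shift * expert_view

# Three learners' forecasts of the same quantity, combined by the M
machine = orchestrate([0.10, 0.12, 0.08], weights=[1.0, 2.0, 1.0])

# Under a paradigm shift (high regime_shift), lean on the expert's judgment
final = augment_with_expert(machine, expert_view=0.02, regime_shift=0.7)
```

The design choice worth noting is that the expert is not a post-hoc override but a first-class input whose influence grows exactly when historical data becomes least informative.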


But, in the interim, we have to settle for understanding the pros and cons of scientific papers with an applied use case vs applied science.

So, there you go: academic papers are rarely the solution to a business problem because they are not conceived to solve a business problem. They are science decorated with applied use cases, not applied science. And rightfully so. It is our mission to leverage them to solve problems scientifically. It may be that we are misunderstanding the role of the business problem in the paper or, symmetrically, that we use the term applied science for something that is not science.

Note that this overall idea, the lack of disruption throughout academia, was picked up by Nature several months later.

Thanks for reading!
