- > We therefore conclude that theoretically motivated experiment choice is potentially damaging for science, but in a way that will not be apparent to the scientists themselves.
They are analyzing a toy model of science; the details are in figure 1. They have a search space with a few Gaussians, like
f(x, y, z) = A0 * exp(-(x-x0)^2 - (y-y0)^2 - (z-z0)^2) + A1 * exp(-(x-x1)^2 - (y-y1)^2 - (z-z1)^2)
but maybe in more than 3 dimensions and maybe with more than 2 Gaussians.
They want the agents to find all of the Gaussians.
It's somewhat similar to a maximization problem, which is easier. There are many strategies for that, from gradient ascent to random sampling to a million other variants. I like simulated annealing.
They claim that the best method is random sampling, but that only works when the search space is small. It breaks quite fast for high-dimensional problems, unless the Gaussians are so big that they cover most of the space, and perhaps I'm being too optimistic even then. Add noise and overlapping Gaussians and the problem gets much harder.
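To make the comparison concrete, here is a minimal Python sketch of that kind of toy objective (a sum of two 3D Gaussians like the f(x, y, z) above), with plain random sampling next to simulated annealing. The peak positions, amplitudes, sampling box, budget, and cooling schedule are all made-up numbers for illustration, not anything taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: a mixture of two Gaussians in 3D, like the f(x, y, z) above.
# Peak centers, widths, and amplitudes are invented for illustration.
CENTERS = np.array([[1.0, 2.0, -1.0], [-2.0, 0.5, 3.0]])
AMPLITUDES = np.array([1.0, 0.7])

def f(p):
    """Sum of isotropic Gaussians centred at CENTERS, evaluated at point p."""
    d2 = ((CENTERS - p) ** 2).sum(axis=1)
    return float((AMPLITUDES * np.exp(-d2)).sum())

def random_sampling(n, low=-5.0, high=5.0):
    """Evaluate n uniform random points in the box, keep the best one."""
    best_p, best_v = None, -np.inf
    for _ in range(n):
        p = rng.uniform(low, high, size=3)
        v = f(p)
        if v > best_v:
            best_p, best_v = p, v
    return best_p, best_v

def simulated_annealing(n, low=-5.0, high=5.0, step=0.5, t0=1.0):
    """Local proposals, accepted with a temperature-dependent probability."""
    p = rng.uniform(low, high, size=3)
    v = f(p)
    best_p, best_v = p, v
    for i in range(n):
        t = t0 * (1.0 - i / n) + 1e-9          # simple linear cooling schedule
        q = np.clip(p + rng.normal(0.0, step, size=3), low, high)
        w = f(q)
        if w > v or rng.random() < np.exp((w - v) / t):
            p, v = q, w
            if v > best_v:
                best_p, best_v = p, v
    return best_p, best_v

print("random   :", random_sampling(2000))
print("annealing:", simulated_annealing(2000))
```

Note that both of these only chase one maximum; the random baseline's hit rate is roughly the fraction of the box covered by the peaks, which is the quantity that collapses as the dimension grows.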
Let's get to a realistic example: all the molecules with 6 Carbons and 12 Hydrogens. Let's try to find all of them and their stable 3D configurations. This is first-year university chemistry, perhaps earlier; nothing cutting-edge.
You have 18 atoms, so 18 * 3 = 54 dimensions, and the surface of -energy has a lot of mountain ranges and nasty stuff, most of it very sharp. Let's try to find just the local maxima of -energy, which is much easier than mapping the full surface. These are the stable molecules, which (usually) have names.
* There is a cyclic one with 6 Carbons, where each Carbon has 2 Hydrogens: https://en.wikipedia.org/wiki/Cyclohexane Note that it actually has two different 3D variants.
* There is one with a cycle of 5 Carbons and 1 Carbon attached to the ring: https://en.wikipedia.org/wiki/Methylcyclopentane
* There are variants with shorter cycles, but I'm not sure how stable they are and Wikipedia has no page for them.
* There are also 3 linear versions, where the 6 Carbons form a wavy line with a double bond in one of the links: https://en.wikipedia.org/wiki/1-Hexene I'm not sure why the other two versions have no Wikipedia page. I think they should be stable, but sometimes a configuration is not a local maximum, or the local maximum is too shallow and the double bond jumps and the Hydrogens reorganize.
* And there may be other nasty stuff; take a look at the complete list: https://en.wikipedia.org/wiki/C6H12
And don't try to make the complete list of molecules once you include a few Nitrogens, because the number of molecules explodes exponentially.
So the random sampling method they propose does not even work for an elementary chemistry problem.
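For a rough feel of why the dimensionality alone kills uniform sampling, here is a back-of-the-envelope Python calculation: the chance that a uniform sample lands within some "useful" radius of any basin scales like the volume of a small ball divided by the volume of the box. The radius, box size, and number of basins below are invented placeholders, and ignoring overlaps makes this an optimistic upper bound:

```python
from math import gamma, pi

def ball_volume(d, r):
    """Volume of a d-dimensional ball of radius r."""
    return pi ** (d / 2) * r ** d / gamma(d / 2 + 1)

def hit_probability(d, n_peaks=10, r=1.0, box_side=10.0):
    """Rough probability that one uniform sample lands within r of some peak
    (overlaps are ignored, so this is an optimistic upper bound)."""
    return min(1.0, n_peaks * ball_volume(d, r) / box_side ** d)

for d in (3, 10, 54):
    p = hit_probability(d)
    print(f"d={d:3d}  P(hit) ~ {p:.3e}  samples needed ~ {1 / p:.3e}")
```

With these made-up numbers the expected number of samples goes from a few dozen in 3 dimensions to an astronomically large count at 54, which is the point above.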
- That said, random or exhaustive search is a more scientifically useful method than you might think.
The first commercial antibiotics (Sulfa drugs) were found by systematically testing thousands of random chemicals on infected mice. This was a major drug discovery method up until the 1970s or so, when they had covered most of the search space of biologically-active small molecules.
- Related, I was talking to a computational chemist at a conference a few years ago. Their work was mostly at the intersection of ML and material science.
An interesting concept they mentioned was this idea of "injected serendipity" when they were screening for novel materials with a certain target performance. They proceed as normal, but 10% or so of the screened materials are randomly sampled from the chemical space.
They claimed this had led them to several interesting candidates across several problems.
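A minimal sketch of what that "injected serendipity" split could look like in code, assuming you already have a candidate pool and some scoring model; the 10% fraction, the batch size, and the toy scoring function are stand-ins, not the chemist's actual pipeline:

```python
import random

def build_screening_batch(candidates, score, batch_size=100, random_fraction=0.1, seed=0):
    """Fill most of the batch with the model's top-ranked candidates, then top it
    up with candidates drawn uniformly at random from the rest of the pool."""
    rng = random.Random(seed)
    n_random = int(batch_size * random_fraction)
    ranked = sorted(candidates, key=score, reverse=True)
    chosen = ranked[:batch_size - n_random]
    rest = ranked[batch_size - n_random:]
    chosen += rng.sample(rest, min(n_random, len(rest)))
    return chosen

# Toy usage: "materials" are just integers and "predicted performance" is a made-up score.
pool = list(range(1000))
batch = build_screening_batch(pool, score=lambda m: -abs(m - 500))
print(len(batch), batch[:3], batch[-3:])
```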
- A few months ago I went to a similar talk. They took a carboxylic acid from a plant (I forgot the name) that has some activity against the caterpillars that eat corn, and made something like 10 or 15 compounds with organic alcohols to get esters. They tried different doses on the caterpillars and then built a computer model to predict the activity of similar compounds (QSAR; a rough sketch of that step is below). The idea is to use it on a long list of other organic alcohols and try to find a better compound.
But they chose chemical reactions that are common in the lab, so they expect to be able to make them work, and they keep most of the structure unchanged. So it's closer to what the paper classifies as searching near the known good points rather than a true random search.
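For readers unfamiliar with QSAR, here is a toy sketch of the modeling step described above: fit a simple regression from a few molecular descriptors to the measured activity of the handful of synthesized esters, then rank a longer list of untested candidates by predicted activity. The descriptors, the fake activity data, and the plain least-squares model are all placeholders, not what the speakers actually used:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend descriptors for the ~15 esters actually made and tested, e.g. chain
# length of the alcohol, logP, molar mass (all invented numbers).
X_train = rng.uniform([1, 0.5, 100], [12, 5.0, 300], size=(15, 3))
y_train = rng.uniform(10, 90, size=15)          # measured % caterpillar mortality (fake)

# Ordinary least squares with an intercept term: the simplest possible QSAR model.
A = np.hstack([X_train, np.ones((len(X_train), 1))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Descriptors for a long list of other, untested alcohols (also invented).
X_new = rng.uniform([1, 0.5, 100], [12, 5.0, 300], size=(200, 3))
pred = np.hstack([X_new, np.ones((len(X_new), 1))]) @ coef

best = np.argsort(pred)[::-1][:5]
print("top predicted candidates:", best, pred[best].round(1))
```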
- They address this specifically and hand-wave it away:
> Moreover, both random and all other experimentation strategies we examined require constructing a bounded experimental space, a challenge that lies beyond the scope of the current work (see Almaatouq et al., 2024, for further discussion).
I think their conclusion is still important to consider, though. It makes a point beyond the practicalities and more towards the philosophy of the approach.
- That is an unrelated problem, which usually is not even a problem.
For molecules, 10 Ångströms away is probably as good as infinity.
For "how many bananas should you eat per week to become the chess world champion?", you can ask Wolfram Alpha to convert 2400 kcal * 7 to bananas and get an upper bound.
I think everyone agrees that with infinite time and resources a brute-force search is better, in case there is some weird combination. But with finite time and resources you need to select a better strategy, unless the search space is ridiculously small and smooth.
- I guess I am not following very well -- what exactly is an unrelated problem? Setting a bounded space?
- This is really interesting, but it appears to hinge on an unstated (and unjustified) assumption: that scientists learn by back propagation, or something sufficiently similar that back propagation is a reasonable model.
It also:
* Bakes in the assumption that there are no internal mechanisms to be discovered ("Each environment is a mixture of multivariate Gaussian distributions")
* Ignores the possibility that their model of falsification is inadequate (they just test more points near those with high error).
* Does a lot of "hopeful naming", which makes the results easy to misinterpret as saying more about like-named things in the real world than they actually do.
- The existence of "experiments" to choose from in the first place is already theory-given. As soon as you've formulated a space of such experiments to explore, almost all your theory work is done.
- What's more, the existence of data (therefore differentiation of what is and isn't), is theory-laden.
- According to Popper, scientists learn by putting out theories and then trying to falsify them through experiments.
- This reminds me of [Why Greatness Cannot Be Planned](https://mythoftheobjective.com). Looking at scientific discovery, there are many examples of happy accidents. The researchers were not intending to find the breakthrough that they did; it was the willingness to change course and explore a new and interesting thing they had stumbled onto. Examples: penicillin, superglue, radioactivity, cosmic background radiation, etc. I loved the example of Robert Williams, who pointed the HST at an empty patch of sky for 10 days. He had his time allocated and no one could stop him, but the other astronomers thought it a poor use of resources. It resulted in the famous Hubble Deep Field image.
A counterexample is the decades during which the amyloid cascade hypothesis was the only allowed/funded line of Alzheimer's research.
- In real life, can you choose an experiment perfectly randomly?
You can ask many people to propose hypotheses and choose one at random, and perhaps with a good sample you get better experiments. You can query a Markov chain until it produces an interpretable hypothesis. But the people, or the Markov chain (because of English itself), have significant bias.
Also, some experiments have wider-reaching implications than others (this is probably more relevant for the Markov chain, because I expect the hypotheses it forms to be like "frogs can learn to skate").
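As a toy illustration of "query a Markov chain until it produces an interpretable hypothesis", here is a word-level bigram sketch over a tiny invented corpus; the corpus and the crude "interpretable" test (just "not verbatim from the corpus") are assumptions, and the bias toward the corpus and English word order is exactly the problem mentioned above:

```python
import random
from collections import defaultdict

rng = random.Random(42)

# Tiny invented corpus of hypothesis-shaped sentences; the chain can only ever
# recombine what is already here, which is the bias being pointed out.
corpus = [
    "frogs can learn to skate",
    "frogs can sense magnetic fields",
    "plants can sense magnetic fields",
    "plants can learn to anticipate light",
]

# Build a bigram transition table: word -> possible next words.
transitions = defaultdict(list)
starts = []
for sentence in corpus:
    words = sentence.split()
    starts.append(words[0])
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def sample_hypothesis(max_len=8):
    word = rng.choice(starts)
    out = [word]
    while word in transitions and len(out) < max_len:
        word = rng.choice(transitions[word])
        out.append(word)
    return " ".join(out)

# "Query until interpretable": here, just until the sentence isn't verbatim in the corpus.
while (h := sample_hypothesis()) in corpus:
    pass
print(h)
```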
> "We find that agents who choose new experiments at random develop the most informative and predictive theories of the world."
There's a neat book about this: "Why Greatness Cannot Be Planned: The Myth of the Objective" https://www.goodreads.com/book/show/25670869-why-greatness-c...
Incidentally, the author works at OpenAI these days.
- I fully admit that I only skimmed the abstract, but I was reminded of an article in Wired about Sergey Brin and his "search for a Parkinson's cure".
https://www.wired.com/2010/06/ff-sergeys-search/
He went backwards and started with just collecting an absurd amount of data. Later while talking to a researcher he could confirm years of research with a "simple" search in his database.
- This is a thought-provoking idea but, even if true, I don't think it will gain much traction. We humans like to be right and earn awards for our predictions. A Nobel wouldn't feel quite the same if given to someone who just happened to randomly stumble upon something.
- What, like penicillin?
- I mean a lot of discoveries are things found along the way in search of something else. Look at something like the initial discovery of super glue.
- Weird that this doesn’t mention grounded theory, a social theory toolkit which people poo-poo for Popperian purposes.
- I think they poo-poo it because it tends to produce just-so stories that "explain" known facts while saying nothing about anything beyond them. To an extent, all hypotheses arise from observations (and more specifically, the frisson between observations and theoretical expectations), but you can't just stop there. Grounded theory just feels like empiricism with a soft blur filter.
(This problem is not just limited to social scientists. I think you could, for example, construct a plausible objection to dark matter as an "explanation" that just "saves appearances" on the same basis.)
- Yeah, I’m aware of those critiques and they are all correct or at least draw blood.
What’s interesting about this paper is the suggestion that perhaps empiricism could do with a soft blur.
One might even invoke KJ Healy’s “Fuck Nuance” here as well.
- Induction vs Deduction.
Grounded theory is probabilistically correct. Deduction, if correct, is actual reality.
Don't get me wrong, I want to love induction, I have William James of Pragmatism on my wall... but the problems with induction hurt me to my core. I know deduction has problems too, but the Platonic Realist in me loves the idea of magic truths.
- This idea suffers from a number of practical obstacles:
One, in a sufficiently advanced field of study, an idea's originator may be the only person able to imagine an experimental test. I doubt that many physicists would have immediately thought that Mercury's unexplained orbital precession would serve to either support or falsify Einstein's General Relativity -- but Einstein certainly could. Same with deflected starlight paths during a solar eclipse (both these effects were instrumental in validating GR).
Two, scientists are supposed to be the harshest critics of their own ideas, on the lookout for a contradicting observation. This was once part of a scientist's training -- I assume this is still the case.
Three, the falsifiability criterion. If an experimental proposal doesn't include the possibility of a conclusive falsification, it's not, strictly speaking, a scientific idea. So an idea's originator either has (and publishes) a falsifying criterion, or he doesn't have a legitimate basis for a scientific experiment.
Here's an example. Imagine if the development of the transistor relied on random experimentation with no preferred outcome. In the event, the inventors at Bell Labs knew exactly what they wanted to achieve -- the project was very focused from the outset.
Another example. Jonas Salk (polio vaccine) knew exactly what he wanted to achieve, his wasn't a random journey in a forest of Pyrex glassware. It's hard to imagine Salk's result arising from an aimless stochastic exploration.
So it seems science relies on people's integrity, not avoidance of any particular focus. If integrity can't be relied on, perhaps we should abandon the people, not the methods.
- > So it seems science relies on people's integrity, not avoidance of any particular focus.
Science relies on replication. And any real gain society gets that comes from science is a form of replication in itself.
Integrity can't be relied on. But then, complete reliability is not necessary, just enough to make replication work.
And also, science is in a crisis due to the lack (or really large delay) of practical use. We actually don't have any other institution that ensures replication happens.