Computers can identify bodies of water and their outlines in satellite images, or beat the world’s best professional players at the board game Go, by using adaptive algorithms. Programmers train these algorithms by presenting them with numerous examples; having learned from those examples, the algorithms can then recognise similar patterns even in unfamiliar scenarios.
But such methods only work when the decision-making criteria are already known: we know, for instance, what a body of water is and which sequences of moves win Go tournaments. Researchers at the Swiss Federal Institute of Technology (ETH) in Zurich have now developed a method that allows computers not only to categorise data but also to recognise whether complex datasets contain categories at all.
As ETH Zurich explains in a statement, the researchers accomplished this by using an “act as if” principle: they pretended they knew where the boundary between different categories lay, and trained a neural network over and over again, assuming a different boundary each time. The network’s ability to classify the data differed from scenario to scenario, which allowed the researchers to establish that the true boundary lies where the network’s sorting performance is highest.
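The statement does not give implementation details, but the idea of sweeping an assumed boundary and scoring classification performance can be sketched in miniature. The toy below is an assumption of ours, not the researchers’ code: synthetic data points carry a scan parameter `p`, their feature distribution shifts at an unknown true boundary, and a simple nearest-centroid classifier stands in for the neural network. For each candidate boundary we pseudo-label the points “as if” the boundary were there, train on half the data, and score accuracy on the other half; the score peaks when the assumed boundary matches the real one.

```python
import random

def make_data(n=2000, true_boundary=0.5, seed=0):
    """Synthetic scan: each point has a parameter p in [0, 1];
    the feature's distribution shifts at the (unknown) true boundary."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        p = rng.random()
        x = rng.gauss(0.0, 1.0) if p < true_boundary else rng.gauss(3.0, 1.0)
        data.append((p, x))
    return data

def accuracy_for_boundary(data, b):
    """'Act as if' the boundary sits at b: pseudo-label each point by its
    parameter, fit a nearest-centroid classifier (a stand-in for the
    network) on the features, and measure how well it reproduces the
    pseudo-labels on held-out points."""
    train, test = data[::2], data[1::2]
    labels = [1 if p >= b else 0 for p, _ in train]
    xs = [x for _, x in train]
    m0 = sum(x for x, y in zip(xs, labels) if y == 0) / max(1, labels.count(0))
    m1 = sum(x for x, y in zip(xs, labels) if y == 1) / max(1, labels.count(1))
    correct = 0
    for p, x in test:
        y = 1 if p >= b else 0
        pred = 0 if abs(x - m0) < abs(x - m1) else 1
        correct += (pred == y)
    return correct / len(test)

data = make_data()
candidates = [i / 10 for i in range(1, 10)]
scores = {b: accuracy_for_boundary(data, b) for b in candidates}
best = max(scores, key=scores.get)
print(best)
```

With a wrongly assumed boundary, some points receive pseudo-labels that contradict their features, so no classifier can reproduce them; only near the true boundary do the labels stay learnable, which is why the sorting performance singles it out.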
This new method could be used in numerous fields. For example, pharmacologists could extract from large molecular databases those molecules with a certain probability of having a specific pharmaceutical effect or side effect. It could also be useful for analysing measurements from particle accelerators or astronomical observations.