Yaron Lipman: Bringing AI to more people


Yaron Lipman is a professor in the computer science and applied mathematics department at the Weizmann Institute in Rehovot, Israel. Lipman heads a team doing cutting-edge research on new technologies that allow computers to capture, analyze and manipulate images, and assemble fragments into a coherent whole. Lipman spoke to Weizmann supporters in Toronto, Montreal and Calgary recently.

What brings you to Canada?

With the aid of Weizmann Canada, we’re trying to raise awareness of a new artificial intelligence (AI) initiative at Weizmann. Weizmann is an interdisciplinary research institute in Israel that does mostly basic research with graduate students and researchers. What we are trying to build is an AI infrastructure: a core facility in the computer science department, with several satellite units on campus in other disciplines, like chemistry, physics and biology. The idea is that basic AI research will happen in the core facility, and the fact that Weizmann is such an interdisciplinary place, where a student can have advisors from different departments, will help push AI knowledge out to the satellite departments and allow AI to be used in different fields to accelerate research.

Does this suggest that AI is becoming more and more important in scientific research?

There is a great need to apply AI in different fields. Even in particle physics, they are trying to interpret images of particles using AI. They are also using it in astronomy, chemistry and biology.

I’ll give you one example. Electron microscopes produce high-quality images of very small things. Once you get the image, you go over it and cluster and mark the things you’re interested in – cells, for example. That’s manual work that takes PhD students hours, days or months.

This problem can be solved if you have an AI expert at your disposal, and it accelerates the research tremendously.
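To make that concrete, here is a minimal sketch of the kind of automation he describes. It uses simple classical thresholding with scikit-image rather than the lab's actual methods, and the synthetic image and the minimum-area parameter are invented for illustration only.

```python
# Hypothetical illustration: automatically marking cell-like blobs in a
# micrograph, replacing the manual clustering-and-marking step.
import numpy as np
from skimage import filters, measure

def mark_cells(image: np.ndarray, min_area: int = 50):
    """Return labelled blob regions (label, centroid, area) in a grayscale image."""
    threshold = filters.threshold_otsu(image)          # global intensity threshold
    mask = image > threshold                           # foreground pixels
    labels = measure.label(mask)                       # connected components
    regions = [r for r in measure.regionprops(labels) if r.area >= min_area]
    return [(r.label, r.centroid, r.area) for r in regions]

# Example: a synthetic image with two bright blobs standing in for cells
img = np.zeros((100, 100))
img[20:30, 20:30] = 1.0
img[60:75, 50:65] = 1.0
print(mark_cells(img))
```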

Your research is focused on capturing images and manipulating them. Can you explain that?

Our lab is dealing with three-dimensional information. The world around us in recent years has been filled with new sensors – whether it is depth cameras, electron microscopes, MRIs, fMRIs, all sorts of different sensors gathering data from the world – and there is a lot of need for automatic algorithms that can process this data and extract some quantitative information from it.

The expertise of our lab is in dealing with geometric information. Classical AI worked with images and sounds. We are trying to adapt those techniques to work with this new kind of data, which is three-dimensional.
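As one illustration of what "adapting to 3D data" can mean, here is a tiny sketch in the spirit of point-cloud networks: a 3D scan is an unordered set of points, so a feature extractor should not depend on the order of the points. This is not the lab's actual architecture, and the weights are random placeholders rather than a trained model.

```python
# Hypothetical sketch: an order-invariant descriptor for a 3D point cloud.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 16))   # shared per-point weights (placeholder values)
b = rng.standard_normal(16)

def cloud_feature(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) array -> 16-dim descriptor, invariant to point ordering."""
    per_point = np.maximum(points @ W + b, 0.0)   # same small transform on every point
    return per_point.max(axis=0)                  # symmetric (order-free) pooling

cloud = rng.standard_normal((100, 3))             # a toy 3D point cloud
shuffled = rng.permutation(cloud)                 # same points, different order
assert np.allclose(cloud_feature(cloud), cloud_feature(shuffled))
```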

What is the potential application of this research?

I’ll give you an example: a project of ours called On-Sight.

It is related to medical imaging and the use of hand-held ultrasound machines to check the condition of the heart.

There are several quantitative measurements that represent the condition of your heart, and extracting them with those machines takes a lot of expertise. You need to be able to place the probe correctly and then to read the noisy image that you see.

These machines recently became widely available. They’re small, hand-held and connect to your phone, but at this point in time, you cannot really use them unless you’re an expert.

So what we are trying to do is, in some sense, take the expertise part, not the doctor’s part, out of the equation.

Basically, if you have this machine, you won’t need an expert beside you to determine your heart’s condition. You’re going to have a computer program beside you, with a user interface, that will guide you through the process.

It could be done by a robot, by a family physician, or in your own home.

The way we explain it to people is by comparing it to Waze, the navigation app. If you want to drive from here to New York and you don’t know the way, you put it into Waze and Waze breaks it down for you into small, digestible steps – go right, go left, keep straight – so you can do it by yourself. It’s pretty much like that.

Our technology will show you where to place the transducer. It will be like a small game of Simon Says. There will be an image and you will have to imitate it.
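A rough sketch of that "Simon Says" guidance loop is below: compare where the probe currently is to a target placement and turn the difference into one simple instruction. The target pose, tolerance and axis wording are made-up placeholders; the real system would infer the target from the ultrasound image itself.

```python
# Hypothetical guidance loop: offset from a target placement -> one instruction.
import numpy as np

TARGET = np.array([0.12, -0.05, 0.30])   # desired probe position (metres, invented)
TOLERANCE = 0.01                          # how close counts as "correct"

def guidance(current: np.ndarray) -> str:
    """Translate the remaining offset into a single Waze-style instruction."""
    offset = TARGET - current
    if np.linalg.norm(offset) < TOLERANCE:
        return "Hold still, acquiring image"
    axis = int(np.argmax(np.abs(offset)))            # largest remaining error
    direction = ["right" if offset[0] > 0 else "left",
                 "up" if offset[1] > 0 else "down",
                 "press in" if offset[2] > 0 else "ease off"][axis]
    return f"Move the probe {direction}"

print(guidance(np.array([0.10, -0.05, 0.30])))   # -> "Move the probe right"
```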


Don’t people want an expert, someone who is trained to do that? Will they trust the results of the machine if they do it themselves?

Most of the questions we get are about trust: can I trust a machine?

I think there’s going to be a way where we can convince the public that it’s as good as an expert.

You can benchmark it and let it compete against an expert. I think there is going to be some sort of benchmark, some test, where you can evaluate it against doctors over a long period, and you will get results that will convince the general public.
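One simple form such a benchmark could take is comparing automated measurements against expert readings on the same cases, as in the sketch below. The measurement (ejection fraction), the numbers and the 5-point tolerance are all invented for illustration.

```python
# Hypothetical benchmark sketch: automated vs. expert heart measurements.
import numpy as np

expert = np.array([55.0, 62.0, 48.0, 70.0, 58.0])   # expert readings (%), invented
machine = np.array([56.0, 60.0, 50.0, 69.0, 57.0])  # automated readings (%), invented

diff = machine - expert
print("mean absolute difference:", np.mean(np.abs(diff)))
print("fraction within 5-point tolerance:", np.mean(np.abs(diff) <= 5.0))
```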

What about doctors? Are they happy with this?

The collaboration is with doctors at New York University (NYU) – it’s Weizmann and NYU. The doctors are divided in two: the more technology-minded people, those who watched a lot of science fiction when they were kids, are happy with it, and there are a lot of conservatives who don’t want to hear about it.

In terms of doctors, what we want to do is have a recommendation system.

First of all, if a doctor uses it, he will get a recommendation, and he can follow that recommendation or not.

Nowadays, there’s no way to tell if a radiologist can really do this exam well. Studies we’ve done show there is a lot of deviation between different doctors on the same case.

Even when you go to a doctor, you get a result with noise, with randomness. We have to regulate this, because it will automatically give you a second pair of eyes on everything you do.

Is it your intention to commercialize this and put it into doctors’ offices everywhere so they can do the test themselves?

Yes, of course. It would be really exciting if that happens.

What will happen is that there will be a spinoff company from Weizmann and NYU, with a CEO. There’s a business question in the middle, but if you ask me, my vision is that this machine will get everywhere – nursing homes, schools, third-world countries.

Is there any pressure on you to make any research you do have a commercial application?

The honest answer is that Weizmann prides itself on conducting really deep basic research and puts no pressure on scientists to do anything. This is like the first commandment at the Weizmann Institute and no one can break it.

However, Weizmann enjoys turning basic research into innovation and companies.

You deal with artificial intelligence. Where is this going? Will our society one day be run by intelligent machines?

I doubt it. If I weren’t in academia and didn’t know how it works from the inside, I might also get the impression that something scary is happening.

But I tell people I meet that AI is a very simple program, a series of numbers that multiply other numbers and sum up.

And usually an AI machine can do only one thing. It’s only based on the data you give it. It has no conscience, no ability to generalize beyond some simple things. And in some sense, we’re light years away from those apocalyptic images of AI.
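His description can be rendered almost literally in a few lines of code: an artificial neuron is just numbers multiplying other numbers and summing up. The weights below are arbitrary placeholders, not a trained model.

```python
# A literal rendering of "numbers that multiply other numbers and sum up".
import numpy as np

weights = np.array([0.4, -0.2, 0.7])   # the "series of numbers"
bias = 0.1

def neuron(inputs: np.ndarray) -> float:
    """Multiply inputs by weights, sum up, and clip negatives to zero (ReLU)."""
    return max(0.0, float(inputs @ weights + bias))

print(neuron(np.array([1.0, 2.0, 3.0])))   # 0.4*1 - 0.2*2 + 0.7*3 + 0.1 = 2.2
```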

Israel is known as a technology leader around the world. Looking into the future, will Israel still have the human capital to ensure it continues to be a world leader in new products and ideas?

One of my biggest sources of pride and joy in working at Weizmann is the students. I think they are a pretty remarkable set of human beings. They are idealistic, in the sense that they turn down big salaries, and they are willing to put in a lot of hours. They have incredible drive. At least that generation is safe. What happens to the Fortnite kids, I don’t know.