
Helping computers to see


Above: Computer vision imagined by Sergey Nivens/DepositPhotos

BitDepth#1106 for August 15, 2017

It takes a little while for anyone getting involved in photography to begin to tell the difference between looking at something, and truly seeing it.

The realisation is usually a sharp one, and it’s usually the first step in moving from taking photographs to making them.

Dr Naila Murray speaking at the Teaching and Learning Center, UWI. Photo by Mark Lyndersay.

Dr Naila Murray, an Arima-born Trinidadian senior scientist and head of the Computer Vision Group at Naver Labs Europe (formerly Xerox Research), based in France, is working on software that helps computers analyse the pixels they capture through their sensors and turn them into usable, actionable information.

This is one of those things that drives a wedge between science-fiction computing and the real world of digital information processing: the sudden realisation that things humans learn within a few weeks of birth are likely to take decades to teach to a machine, and even then the process will probably be imperfect.

We come into the meat world with the most sophisticated computing device yet seen, and the human brain can marshal as much as 70 per cent of that analytical capacity to the task of seeing.

“Human vision is very perceptive, but it is also deceptive,” said Murray.

“What we see is in our heads; it is the result of processing by our brains.”

The digital vision specialist proceeded to demonstrate, with alarming ease, just how simply the brain’s analysis patterns can be short-circuited.

Feel free to have your mind blown here and here, with a whole collection here that’s sure to make you doubt everything you see from now on.

“Computer vision development is all about replacing the brain as an interpretive system,” Murray said.

Computer vision research seeks not just to align the analytical capacity of modern computers with human vision; it also hopes to do things that humans cannot.

Human vision, for instance, is passive. It depends on light from external sources. Technology is not limited in that way.

We see just 0.1 per cent of the available spectrum. It’s the most efficient slice of a very large range of wavelengths, but we are entirely blind to infrared and ultraviolet.

Murray’s work is dedicated to bridging the gap between pixels and meaning, and the challenges for computers are significant.

Much of human vision is learned knowledge. We observe, and we compare what we are seeing with what we already know.

So something as simple as understanding a body’s pose or a person’s gesture, what bodies are doing in different clothing, how people relate to their background, and recognising objects in context even when they can’t be seen clearly, all draw on a massive database of information that we gather intuitively.

Success in unlocking this capability lies at the heart of enhancing computer vision, which already manifests itself in our daily lives through the face-detection technology built into photo apps and social media, biometrics, optical character recognition, goal-line analysis in sports and automated medical analysis of MRI scans.
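Face detection is the easiest of these to see for yourself. As a minimal sketch only, and not anything drawn from Murray’s own work, the snippet below uses OpenCV’s bundled Haar-cascade detector (assuming the opencv-python package is installed, with a hypothetical photo.jpg as input) to draw boxes around likely faces:

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade with the library
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")                 # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the cascade works on greyscale

# Returns one (x, y, width, height) rectangle per likely face
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", image)
print(f"Found {len(faces)} face(s)")
```

Even this older, cruder approach than what photo apps now ship illustrates the point of the column: the computer is matching pixel statistics against learned patterns, not “seeing” faces the way we do.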

Rapidly evolving technologies include the virtual reconstruction of the planet’s surface in three dimensions from 2D satellite imagery, the work being done on autonomous vehicles, and virtual systems for use in medical rehabilitation.

For now, fooling computer vision is not terribly difficult.

A simple overlay of image noise can totally derail a computer’s view of an image and even lead it to see something else entirely. Such errors can be embarrassing, and as we trust computer analysis of the real world more, they can also prove deadly.
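The trick described here is usually called an adversarial example. As a rough sketch only, assuming PyTorch and torchvision are available and using a hypothetical photo.jpg as input, the fast gradient sign method asks a classifier which direction of pixel change would confuse it most, then nudges every pixel a near-invisible amount that way:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained ImageNet classifier (torchvision downloads the weights on first use)
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()

# Keep the image in plain pixel space [0, 1]; normalisation happens just before the model
to_pixels = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalise = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

pixels = to_pixels(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)  # hypothetical input
pixels.requires_grad_(True)

logits = model(normalise(pixels))
label = logits.argmax(dim=1)

# Gradient of the classification loss with respect to the raw pixels
F.cross_entropy(logits, label).backward()

# Fast gradient sign method: move each pixel slightly in the direction that hurts the model most
epsilon = 2 / 255
noisy = (pixels + epsilon * pixels.grad.sign()).clamp(0, 1)

with torch.no_grad():
    new_label = model(normalise(noisy)).argmax(dim=1)

print(weights.meta["categories"][label.item()],
      "->", weights.meta["categories"][new_label.item()])
```

With a large enough epsilon the predicted label usually changes, even though the two images look identical to a human viewer.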

And there are looming issues with privacy and personal security.

Facebook and Google are both working in this field and hold an unprecedented database of images from throughout the world.

So I asked the good doctor what happens when these companies can analyse all those images at computer speed, allowing them to know everything about everyone, everywhere.

Dr Murray paused a moment and smiled.

“Well, so will governments,” she replied.

Her hope is that legislation, public awareness and good sense will allow these new technologies to be controlled and managed.

We’ll see how well “Don’t be evil” fares in the face of digital omnipresence.
