I’m Totally Blind. Artificial Intelligence Is Helping Me Rediscover the World.
Oct 11, 2023
If someone had told me, a couple of weeks earlier, that I would be taking pictures of everything that crossed my path, I would have laughed in their face. But there I was, sitting on the sidewalk, looking to capture the perfect shot that would allow me to learn a little more about the world I am a part of: the expression of the guide dog who is always by my side; the bustle of a busy street full of buildings, cars, and signs; the box of desserts I just bought, wondering whether it looked appetizing enough to bring to a family dinner. I can’t see these things, which are so easy to take for granted, with my own eyes. But A.I. has now brought me as close to being able to do so as I’ll probably ever be.
I was born totally blind, and my visual world has always been determined by what well-meaning people can tell me about my surroundings. To appreciate all the details of a room or to read a menu in a restaurant, I was dependent on someone else. When I took photos, I often recorded short voice notes describing where I was and how I felt, hoping I could someday pair the two and bridge the gap. Most of my camera roll was filled with photos taken for others to appreciate, since no one could sit for hours with me to describe the way the sea crashed against the rocks or the details of a busy, lively street in Italy. The more concrete details, in the end, were always left to my imagination—which, though vivid, always needed more.
When I first heard about Be My AI—a new collaboration between OpenAI and Be My Eyes, an app that connects sighted volunteers with blind people who need help via video call—I didn’t let myself get too excited. Be My AI promised to allow blind people to receive an A.I.–generated description of any photo we uploaded. This was a tantalizing prospect, but it wasn’t the first time a tech company had promised to revolutionize the way people with disabilities access visual content. Microsoft had already given us Seeing AI, which in a very rudimentary way provided a rough idea of what was going on in the images we shared, and which allowed us—again, in a fairly basic way—to interact with information contained in written texts. But the details were missing, and in most cases we could know only that there was a person in the picture and what they were doing, nothing more. Be My AI was different.