I have been fascinated by AI and computer vision for years now (especially given that I'm rather visually challenged myself!).
The Huawei P30 Pro was the first smartphone to consistently see better and further than I can, thanks to its designed-in approach to optical zoom and AI-augmented combination zoom.
That leads me to this discussion: function by design. I only see with one fully functional eye, which is generally fine but denies me the full experience of binocular disparity that makes the world so vibrant for most people.
My world view feels akin to the single-camera setups of old, with exceptional people working to recreate visual processing in hardware and software so their programs can use the visual data. It still feels a bit fragmented, though, so this discussion is about encouraging people (especially those with disabilities) to think about design as a way to enhance functionality. Why do I mention disabilities? Because that perspective is valuable: you don't take things for granted, and I'm willing to bet you've encountered obstacles and annoyances that most people wouldn't even notice. That experience is a real asset in innovation.
Mine led me to this bit of biology while studying Vision as part of Psychology:
FBD: Did you know your eye does all the processing needed to detect an object's edges, thanks to the layout of cells that pass information from the front-line light detectors to the optic nerve? Edge detection is designed into the "hardware".
...the brain then takes over to deal with distance, occlusion, known object properties and so on, so you always recognise a whole object even when it's partly hidden, and know its likely texture, hardness, weight, use, etc. This has been a hot topic in computer vision and AI, hampered mostly by the processing overhead of the first stage of object recognition: finding edges, which biology would have you believe is easy.
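The retinal wiring described above behaves much like a centre-surround filter: each output cell is excited by light at its centre and inhibited by light around it, so uniform regions cancel out and only edges produce a response. A minimal sketch of that idea in Python/NumPy (the kernel and toy image are illustrative, not a model of real retinal cell layouts):

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D convolution, valid mode, no external dependencies."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Centre-surround kernel: excitatory centre, inhibitory surround,
# loosely analogous to a retinal ganglion cell's receptive field.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

# Synthetic image: dark left half, bright right half (a vertical step edge).
image = np.zeros((5, 8))
image[:, 4:] = 1.0

response = convolve2d(image, kernel)
# Zero in uniform regions, non-zero only where brightness changes —
# the "edge detection" falls out of the wiring, not a separate step.
print(response[1])  # → [ 0.  0. -3.  3.  0.  0.]
```

The point is that the filter needs no notion of "object" at all; the edge map is a by-product of how the cells are connected, which is why biology makes this first stage look so cheap.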
FBD: Camera position. AI can process depth with almost any lens spacing, so positioning has rarely been a design topic. I'd invite the collective here to think about it again, though, from my perspective. What could we achieve, in functionality, by placing camera arrays at the typical distance you'd find human eyes spaced apart? Such a pair could bring 3D stereographic images to life and see more of an object's surface than a narrower array of lenses, potentially making 3D scanning faster and more accurate.
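One way to see why an eye-like baseline matters: in the standard pinhole stereo model, depth Z relates to pixel disparity d by Z = f·B/d, so the lens spacing B directly sets how much disparity (and thus depth resolution) a given distance produces. A sketch, assuming an illustrative 65 mm baseline (a typical adult interpupillary distance) and a made-up 1000-pixel focal length — neither value comes from a real camera spec:

```python
# Pinhole stereo model: depth Z = f * B / d, where
#   f = focal length in pixels, B = baseline in metres, d = disparity in pixels.
# The baseline and focal length below are illustrative assumptions.

def depth_from_disparity(disparity_px, baseline_m=0.065, focal_px=1000.0):
    """Depth of a point given its left/right pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def disparity_at_depth(depth_m, baseline_m=0.065, focal_px=1000.0):
    """Inverse: how many pixels of disparity a point at this depth produces."""
    return focal_px * baseline_m / depth_m

# A wider baseline yields more disparity at the same depth, hence finer
# depth resolution; an eye-like ~65 mm spacing gives useful disparity
# across roughly human-scale working distances.
for z in (0.5, 2.0, 10.0):
    print(f"{z:5.1f} m -> {disparity_at_depth(z):6.1f} px")
```

Running this shows disparity shrinking as 1/Z (130 px at 0.5 m down to 6.5 px at 10 m with these assumed values), which is one argument for matching the baseline to the distances the device is actually meant to work at, as human eyes effectively do.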
What are your thoughts? Do you have a disability that could provide a nugget of design insight most wouldn't have come across? I'd love to hear from you.