Can AI systems experience illusions as in human vision? For instance, look at the picture below.
Most people see a hole that flips to a pyramid and back over time. Because of the shape of the shadows and the bright wall on the left, the brain "knows" that it is a pyramid, yet the sensors (the eyes) report conflicting information. Would an AI object recognition technique give a different answer each time it "sees" the image? Or after more training data is added to the system? I would think AI algorithms can be fooled, but in a different way than biological intelligence (the human brain).
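One point worth noting on the "different answer each time" question: a standard trained network is deterministic at inference time, so the same pixels produce the same output on every pass; the answer only changes if the weights change (e.g. after retraining on more data). A minimal sketch with a toy fixed-weight classifier (the two class labels and the feature vector are hypothetical stand-ins, not a real model):

```python
import numpy as np

# Toy fixed-weight "classifier": one linear layer plus softmax, standing in
# for a trained object-recognition network with frozen weights.
rng = np.random.default_rng(seed=0)
W = rng.normal(size=(2, 4))   # 2 hypothetical classes: "pyramid", "hole"
b = rng.normal(size=2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(x):
    # Deterministic forward pass: no randomness once W and b are fixed.
    return softmax(W @ x + b)

x = np.array([0.9, 0.1, 0.5, 0.3])  # stand-in for extracted image features

p1 = classify(x)
p2 = classify(x)
print(np.allclose(p1, p2))  # True: same input, same weights, same answer
```

Unlike a human observer, the network has no mechanism for spontaneously flipping between interpretations; any bistability would have to come from changing inputs, weights, or injected noise.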
This picture was posted here, though I don't know whether it is a real object or an AI-generated image. Clearly, the illusion is caused by loss of data (or data reduction, as statisticians call it) when a 3-D object is projected onto a 2-D image. The solution would be trivial if the viewing angle were different, if we were offered two pictures from two different angles, or if we had a video showing the object rotating.
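The data-loss argument can be made concrete with the standard pinhole-camera model: a 3-D point (X, Y, Z) projects to (X/Z, Y/Z) on the image plane, so every point along the same ray through the camera lands on the same pixel, and depth is unrecoverable from one view. A small sketch (the two points are illustrative values, not measurements from the picture):

```python
def project(point):
    # Pinhole projection from one camera at the origin.
    x, y, z = point
    return (x / z, y / z)

def project_from(point, cam_x):
    # Same projection from a second camera shifted along the x-axis.
    x, y, z = point
    return ((x - cam_x) / z, y / z)

near = (0.5, 0.5, 1.0)   # e.g. a pyramid tip close to the camera
far  = (2.0, 2.0, 4.0)   # e.g. the bottom of a hole, four times farther away

# One view: both points hit the identical pixel, so depth is ambiguous.
print(project(near), project(far))   # (0.5, 0.5) (0.5, 0.5)

# Second view from a different position: the pixels now differ,
# which is why two angles (or a rotating video) resolve the illusion.
print(project_from(near, 1.0), project_from(far, 1.0))
```

This is the same reason stereo vision and structure-from-motion work: the second viewpoint restores the depth information that a single projection discards.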
The term combines 'artificial' and 'intelligence', both of which make complete sense on their own. 'Artificial' can be understood as something that is not found in nature, having been designed by human beings from natural sources, while 'intelligence' refers to the general ability to think or the capacity to reason.
Every aspect of learning, or any other feature of intelligence, that can be simulated in a machine could be described as Artificial Intelligence. Superintelligent AIs could take this to a different level.
Looking at it, the illusion falls apart quickly because of the asymmetry of the illusory interpretation. I'd guess that an AI system would go down the same wait-a-sec-that's-not-right path that I did. The trick, as for a person, would be recovering: restarting the analysis with a new awareness of the asymmetries. Is that an AI skill?