Idling my mind on some out-of-band thoughts, I started realizing that we (as humans, as participants in virtual social networks, and as participants in real-life social networks that have a virtual representation) are in the process of training the most impressive AI yet to come. The technology and the hardware brains are almost there, spread around the globe and our orbital space. What's missing is the AI itself.
Just think about what it means to give a like/dislike in one of the following combinations, (YouTube/Facebook/Buzz/Twitter/etc.) x (video/photo/comment/text/article/etc.): it is mostly equivalent to training a huge neural network with per-node weights. One can view the weights as simply +1/-1, or as a more complex formula; for example, given a specific user/node, the AI could calculate the weight of his/her like/dislike vote based on its familiarity with that user.
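A minimal sketch of that idea, assuming a made-up "familiarity" score in [0, 1] (all names and formulas below are hypothetical illustrations, not any platform's actual ranking code):

```python
def vote_weight(vote, familiarity):
    """Map a like (+1) or dislike (-1) to a training weight.

    familiarity: hypothetical score in [0, 1] for how well the
    system knows this voter (e.g. length of their voting history).
    """
    return vote * familiarity


def update_item_score(score, votes, learning_rate=0.1):
    """Nudge an item's score by the weighted sum of incoming votes."""
    signal = sum(vote_weight(v, f) for v, f in votes)
    return score + learning_rate * signal


# Three users vote on one item: two likes and one dislike,
# each carrying a different familiarity level.
votes = [(+1, 0.9), (+1, 0.4), (-1, 0.7)]
new_score = update_item_score(0.0, votes)  # 0.1 * (0.9 + 0.4 - 0.7)
```

The point of the sketch is just that a vote is not a raw +1/-1: it is scaled by what the system already knows about the voter before it moves the item's score.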
The nodes are contextually aware (e.g. text, which is easily parseable, and video/photo, which is at least parseable through meta-information, while content analysis via audio/video-level algorithms keeps improving). So the AI is becoming better trained not just on social interactions (which it mostly already is), but also on our emotions (e.g. when there is a comment fight with a lot of slang, cursing, likes and dislikes flying around), on our way of thinking, and on how we decide whether we like or dislike a given piece of information in a given context.
Combine that with HAD (Human Aided Design & Decision-making, the exact opposite of CAD), e.g. people aiding the AI by manually correcting wrongly detected and/or recognized faces/objects in photos/videos…
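That human-aided correction loop can be sketched like this, under the assumption that manual fixes both override the model's output and get fed back as new training examples (function and photo names here are invented for illustration):

```python
def apply_corrections(predictions, corrections):
    """Override model predictions with human corrections.

    predictions: {photo_id: label the AI guessed}
    corrections: {photo_id: label a human asserted}
    Returns the final labels and the corrections that disagreed
    with the AI, i.e. the new high-confidence training examples.
    """
    final = dict(predictions)
    new_training = []
    for photo_id, label in corrections.items():
        if final.get(photo_id) != label:
            final[photo_id] = label
            new_training.append((photo_id, label))
    return final, new_training


# The AI tagged three photos; a human fixed one wrong face tag
# and confirmed another.
preds = {"img1": "Alice", "img2": "Bob", "img3": "Carol"}
fixes = {"img2": "Dave", "img3": "Carol"}
labels, examples = apply_corrections(preds, fixes)
```

Only the disagreement (`img2`) becomes a new training example; confirmations cost the human a click but teach the model comparatively little.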
…and you can picture the result.
Just my threaded thought.