Let's make the example as basic as we can manage. Program a computer to be able to "learn" (i.e., machine learning, or "artificial intelligence"). Adjust its capability to include the identification of specific objects. Let the computer review this information, index it, and relay it back.

Image recognition technology has grown in leaps and bounds over the last decade and has played a role in everything from science and national security to photo editing and social media. Google's highly popular "Photos" app (among others) offers the ability to tag faces and, in turn, automatically identify those faces in images uploaded afterward. The work is done for users, so long as they play along by providing the initial information. This example stands as a microcosm of neural network programming.

Google's "Deep Dream" research began just this way. By "teaching" a computer to recognize objects in an image, it unlocks a facet of artificial intelligence that has the potential to do an awful lot of good. A program that can, essentially, "see" images and identify what is inside (or on) them can relay that information back to the user through text captions and speech. Some have speculated that it will also go a long way toward helping scientists further their research on viruses and improving independent robotic animation.

But that isn't what this is all about. "Deep Dream" has proven capable of different, if not more artistic, offerings. Google, in its constant tinkering, wanted to experiment with filtering layers to see what would happen if it encouraged the code to look for objects in images, even if those objects were not originally represented in the pictures. Since the program has the ability to identify the contents of an image when given the parameters, why can't it be pushed to hypothesize, or infer, their presence? The answer is, of course, that not only can Deep Dream do so, but the results are astounding (if not completely freaky).

A lot of the first images available online once Google released the code publicly were crammed with dog faces, and not just dog faces, but dog faces in the strangest places (like above). One particularly talented developer even went so far as to run a snippet of Fear and Loathing in Las Vegas through the code and produced absolutely bizarre results. But the message was clear: despite the trippy effects Deep Dream produces, the technology works. In this case, it's just one branch of the code that someone at Google toyed with; there is a host of other outrageously entertaining (and slightly disturbing!) possibilities once you have access to the code.

Sure, the code is available for anyone to use. It's just a matter of, well... gaining access. The problem is that in order to use it, one has to be intimately familiar with and knowledgeable of Python, a sophisticated programming language favored by experienced developers.
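The "look for objects that aren't there" trick boils down to gradient ascent: instead of adjusting the network to fit the image, you adjust the image to excite the network, so it gradually "sees" its favorite patterns everywhere. Here is a minimal toy sketch of that loop in NumPy. It is illustrative only and not Google's code: the real Deep Dream runs gradient ascent through layers of a trained Inception network, while here a single hand-rolled filter stands in for a learned feature detector, and the step size and loop count are arbitrary.

```python
import numpy as np

def dream_step(image, filt, lr=0.1):
    """One gradient-ascent step: nudge the image to boost the filter's activation."""
    activation = np.sum(image * filt)   # the "layer's" response to the image
    grad = filt                         # d(activation)/d(image) for this linear layer
    # Normalize the gradient (as the Inceptionism demo did) and step uphill.
    image = image + lr * grad / (np.abs(grad).max() + 1e-8)
    return image, activation

rng = np.random.default_rng(0)
filt = rng.standard_normal((8, 8))      # stand-in for a trained feature detector
image = np.zeros((8, 8))                # start from a blank canvas

for _ in range(50):
    image, activation = dream_step(image, filt)

# After repeated steps the image correlates strongly with the filter:
# the pattern the "network" detects has been dreamed into the picture.
similarity = np.sum(image * filt) / (np.linalg.norm(image) * np.linalg.norm(filt))
```

Swapping the toy filter for activations of a real convolutional layer (and backpropagating to get `grad`) is exactly what turns this loop into the dog-faces-everywhere effect.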