Cody Pallo


Toggle Button: Emoji Mood Status Update Application

March 16, 2019

Toggle Button is a 3” round pinback button with a full-color digital display. Users pair it with an iPhone and use the companion app to pick one emoji to display on the button. Whenever their mood changes, they can update the emoji simply by entering a new one in the app’s status box.

Users can put the button anywhere they like: it has a standard pinback pin for fabric and a replaceable magnetic back for metal. Charge it via USB and take it anywhere.

Friends can share the emoji currently displayed on their pin online, projecting their mood in real time. It’s a conversation starter, or it can signal a wish for some distance.

It’s like Twitter, only with a single graphic and no words. In the future, users may choose to share their status above their virtual avatar or physical self with those who have AR/VR access. Eventually, sound bites and animation could be added, such as Apple Animoji support.


ToolRefinery: An XR Machine-shop Application for Micro-fabrication

March 14, 2019

Would you consider making art in VR if you had a vast array of tools and materials at your disposal, similar to what you might have in a professional studio? Let’s call the app you would use “ToolRefinery.”

This app can undo and redo, and cut and paste; you can paint with the precision of a very fine brush, and when you are done, you press “print” and it sends your art “action” data to an advanced robotic fabrication facility to be produced as a unique object or in editions.

Say you made a painting in VR much as you would now in Real Reality. You mix the colors on a virtual palette and paint a virtual canvas that you stretched in VR. With the ToolRefinery method, a printing robot mimics the strokes on a real canvas and also stretches the canvas just as you did.

Every action is recorded except the ones you discard with undo and the ones the AI deems irrelevant for the robot to perform; you could call those shortcuts.

One advantage is that the fabrication data is preserved for the future and is infinitely reproducible, so a painting is not just a painting; it’s a print. The same goes if you sew, knit, or build a car. The possibilities are endless.

ToolRefinery could do more advanced things that would cost a fortune with pre-bought supplies and linear techniques. Imagine the savings on artist materials alone. Think of all the toxic paint wasted on some pieces; the money spent renting a studio could instead go toward storing the few works you feel are a success.

The Mirror Network: Travel Space & Pan Through Time Simultaneously

October 3, 2018

Let me ask you this… what if we all wore ultra-thin VR visors with small 360-degree volumetric cameras around the perimeter of our heads, the kind that captures 3D space and time (volumetric video)? Imagine people walking around recording all their surroundings wherever they go. Imagine people documenting their entire daily lives this way. Now imagine them pooling their data with everyone else using something I call the “Mirror Network,” an AI-enhanced mesh of time and space. It’s like the internet, only think of the immense power here.

With space, you could literally use your hands to manipulate the scale of the world around you in real time. You could shrink the world as you would a 3D model in a modeling program and see it as a Google Earth in VR that you can fly around in. In this mode, there would be a core model of the earth, and every nook and crevice of every apartment building and every home, inside and out, would be visible to everyone on the network. Overlaid on this would be a mesh of data collected from everyone who wears a visor; these people would be like timekeepers. Compare this to Google Street View: trucks drive around recording the streets, and keeping that data fresh is a lot of work. With this network, the data would always be new and user-generated.

With time, imagine you could scrub through it like a video slider, in reverse or forward. Where time was not recorded by a human or a street camera, panning would be a little choppy, but you might not care very much, because the world could be stitched together with AI from the collected data, and people could be stitched together from their realistic avatar data. The new iPhones, for example, scan my face so often that AI will eventually be able to pan my face and show how I aged over time. So imagine a clock in sync with all video recorded to the Mirror Network. You would be able to pan time.

So what if you could turn on a person mode? Say you are walking the streets 10 years in the future, but you travel into a “deep” time pan five years into the past, and you run into a person you crossed on the road back then. You could someday potentially choose to call that person in the present day in VR; if they are a stranger, maybe they have left their contact available to the network. If these visors were on everyone and people had lifelike, emotive human avatars, you could call that person from five years ago, and they could literally stand beside you and experience that moment with you during the call. This gives people an incredible amount to talk about, doesn’t it?

So now we come to the idea of a “mirror person” that hides like a raccoon. With this concept, people may form tribes and sneak around away from people who are not like them. If we all had the information to know where everyone is in time and space at any moment, and people can travel time and space, what’s to stop some of them from hiding around any corner, unknown, like a ninja hacker? It’s a little paranoid. More likely, people would sync their lives for maximum exposure to those most like them.

So it’s worth thinking about… is a Mirror Network a good idea or a bad idea? The real question is… is it such a powerful tool that it will simply need to exist, just as the internet does today?

Angelfish: Robotic Hosts for Holographic Guests

August 9, 2018

I am using this idea as an example of how many good emerging-technology applications may not be possible for years to come. It is called “Angelfish,” and it is based on the concepts of facilitator, host, and guest.

Imagine tiny all-terrain flying robots the size of beetles that can be plugged in, charged, and brought out when there are visitors. These robots contain miniature real-time 360° photogrammetry cameras, microphones, speakers, drone propellers for flight, and legs to walk on land, move across the surface of the water, and swim underwater. When used by your guests, these little machines act as hosts. Put on augmented reality glasses like the HoloLens or the Magic Leap Creator, and they turn into holographic, moving, full-body avatars of the people you invited to visit you.

To accomplish this, guests wear full-body virtual reality motion-capture suits with facial recognition built into the VR headset. The robot scans the whole room, produces a 3D model, and then refines it with each pass. That model is broadcast to the guest’s VR headset. Guests can use the hand controls in VR to fly the robot around the physical environment where they have been set free to explore. They can also use the mic and headphones in their headset to speak to people in the physical environment.

What’s truly great about this is that everyone, whether in virtual reality or looking through augmented reality, can see each other at the same time. Many people can visit at once, and when out in public, everyone can see all the other guests as well. This technology would be a hit at special events or just hanging out with a friend on a casual day. Avatars are fluid: a guest could look like Tinkerbell one moment while flying around, then be a mermaid the next while swimming in a fish tank. They could even be performers that everyone comes to see.

These ideas are really not that far off. Someone could begin building them now and perhaps even have a prototype in a couple of years. It sounds like an exciting prospect to me. Go ahead and do it if you have the resources, and if you need any help, just let me know.

Phantom Actor: An AI-assisted 3D Modeling, 2D Photo Simulation Tool

August 7, 2018

“Phantom Actor” is a VR concept that I would like to share with those interested. It’s not really a new idea; if you follow AI creative-application development today, these concepts should seem pretty rudimentary. This idea is a somewhat strange consumer product, though, and has some interesting social ramifications.

As the title states, “Phantom Actor” is an AI-assisted 3D modeling and 2D photo simulation tool. It composes believable, photorealistic 2D photos from 3D scenes, objects, and characters created in any 3D modeling program, such as Blender or Maya. Using 2D image data pulled from the web, the application transforms 3D-modeled scenes into entirely realistic 2D impostor photos.

For example, a photo taken in VR of a task-chair model would look just like a real task chair once the filter is run. One could even make a model of a human being, pose that model in the chair, and then superimpose a real person on it using their available facial and body information. Programs can already make accurate 3D models of faces from standard 2D images of people on the web, so it is conceivable that anyone with the right tools could scrape accurate facial data with an image search and make that data publicly available.

With this tool, one could create realistic images of anything happening, with anybody doing it. It would seriously challenge the long-held assumption that digital media constitutes proof of any kind whatsoever. Digital media should never be an acceptable means of determining the truth; we should recognize that all media, sometimes even physical media, can be forged into a web of false information. Never underestimate the power of art and science. Together they can create illusions that challenge our very assumptions about reality.

ArtAcclaim: Blockchain Fine Art Network

March 16, 2018

A social game based on the business of contemporary fine art.


I believe Digital Art (multimedia) will be an essential extension of one’s self-image in the future, much like any other possession, such as clothes, homes, or cars.


For this site, there are four categories of users.

Patrons — Just everyday visitors to the site.
Artists — The people that make the art.
Collectors — The people that purchase or sell art to any other user.
Gallery — The people that exhibit the work of an artist and receive a commission.


The ArtAcclaim site operates as the marketplace.

Auction House

The Rules:

Only “Artists” can upload their original art. Art is any digital 2D or 3D visual, sound, or video. “Artists” give art an initial price based on a fictional point system (coins or a cryptocurrency). Any user can buy art from someone else with these coins. Users can buy 100 coins for one dollar on the site. “Artists” pay 100 coins to upload each art piece, which they must explicitly own the rights to. Coins are worth only 1/100th of that value if redeemed for cash on the site.

Artists can sell their own art themselves in their own XR “Gallery,” exhibit it in someone else’s XR “Gallery” (the seller takes half of the sale), or auction it, depending on whether the “Artist” has enough “Acclaim.” Acclaim is essentially notoriety: it is how many “likes” an art piece or gallery gets, and notoriety is the only way an artist’s work will ever gain value. Auctions are hosted by the site, and the site takes a commission.

If an artist sells their work, they get more coins. Once an artist sells a piece, the new owner can sell it for more; however, the artist doesn’t receive money for any sale of art they no longer own. An artist can, of course, buy back their own work, and once they do, they can sell it for more or hide it. Users can hide an artwork they own, but they can never delete it (unless it violates copyright or the terms). You must hold the art to be able to hide it; if you are the artist who created a work but no longer own it, you cannot hide it.
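The coin rules above can be sketched in a few lines of code. This is a hypothetical illustration of my own; the constant names, rates, and functions are assumptions drawn from the rules, not a real ArtAcclaim API.

```python
# A sketch (assumed names) of the ArtAcclaim coin economy.

COINS_PER_DOLLAR = 100   # users buy 100 coins for one dollar
UPLOAD_FEE = 100         # artists pay 100 coins per uploaded piece
GALLERY_SPLIT = 0.5      # an exhibiting gallery takes half of a sale

def buy_coins(dollars: float) -> int:
    """Convert a dollar purchase into coins at the posted rate."""
    return int(dollars * COINS_PER_DOLLAR)

def upload_art(balance: int) -> int:
    """Deduct the upload fee; the artist must cover it to list a piece."""
    if balance < UPLOAD_FEE:
        raise ValueError("not enough coins to upload")
    return balance - UPLOAD_FEE

def settle_gallery_sale(price_coins: int) -> tuple[int, int]:
    """Split a gallery sale between the seller and the exhibiting gallery."""
    gallery_cut = int(price_coins * GALLERY_SPLIT)
    return price_coins - gallery_cut, gallery_cut
```

So a $5 purchase yields 500 coins, and after the 100-coin upload fee, 400 coins remain to price or promote the piece.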

Art that receives the most acclaim goes into an XR Museum hosted by ArtAcclaim. Curators for the site put on shows of emerging artists, well-known artists, and works that stick to a theme or topic.


Each artwork is watermarked by the app and can be uploaded only once. Machine learning analyzes each work and compares it to the others on the site, so that every work of art is unique and does not violate the terms. It also compares each work to famous works of art already in existence.

Photon Particle: Geolocated Democracy

March 16, 2018

Photon Particle is a geolocated demographic polling application for phone-based XR.


Users create questions with predetermined answers and drop them anywhere they like as small bubbles; for example, on an overhead menu board next to the vegan option, or at the entrance to the White House. When another user points their phone at a bubble, the number of answers appears as a tooltip. If the user taps the screen, the bubble expands to fill the phone screen so they can read the question and opinions in detail. If the user has never answered the question, they can do so, or they can change their answer if they have.


Each user’s answer has a visible lifespan of one week unless they manually refresh their response at that location or remotely via a map. That means each answer is dropped from the bubble’s visualization after a week, and the bubble gets smaller unless many people keep adding to it over time.
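The lifespan rule is simple enough to sketch. In this illustration (the function and field names are my assumptions, not part of any real API), an answer stays visible for seven days after its last refresh, and a bubble's size tracks its count of currently visible answers:

```python
# Minimal sketch of the one-week answer lifespan and bubble sizing.

from datetime import datetime, timedelta

LIFESPAN = timedelta(weeks=1)

def visible_answers(answers, now):
    """Keep only answers refreshed within the past week.

    answers: list of (user_id, last_refreshed_datetime) tuples.
    """
    return [a for a in answers if now - a[1] <= LIFESPAN]

def bubble_size(answers, now, base=10, per_answer=2):
    """A bubble shrinks as answers expire: size tracks the visible count."""
    return base + per_answer * len(visible_answers(answers, now))
```

A bubble with many recent refreshes stays large; one nobody revisits decays back to its base size within a week.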

All this information is then indexed with infographics via a search page on the 2D web, so users can dig up old geolocated opinions and see their size on a timeline in graph format. They could search, say, “vegan sandwiches” and find the best vegan sandwich in the world by viewing a bubble cloud of popular opinions and answers, or be more precise and locate views by location.

There could, of course, be a trending-opinions page with all the world’s biggest questions.

I hope to see great ideas come from this.

Thinkithing: Version Control for Your Thoughts

March 15, 2018

The goal of this product is to eliminate wasteful, time-consuming miscommunication in meetings and to surface useful but overlooked side tangents among collaborators. Traditional, unassisted ideation methods may eventually seem difficult, cumbersome, and inefficient in comparison.


Like many people, I sometimes lose track of facets of conversations: philosophical discussions, introductions to new people, meetings, debates, or even fleshing out product ideas on my own.

I occasionally feel I’ve lost great moments for sharing valuable information by not utilizing the combined mental space available to the group at the precise moment it was needed. When we digress, it can feel like explaining a joke; if the opportunity is missed, the idea can go to waste. Yet sometimes that information is incredibly useful and vital, just misunderstood.

I feel “Thinkithing” solves this missed-opportunity problem by allowing visual conversation traversal via a version-control-like interface in virtual reality.


Version control is a way to save versions of computer files. It is based on a tree structure and a timeline analogy. In “Thinkithing’s” method of version control, the tree structure is a hierarchy of rooms, and the timeline is a series of committed saves of an object or group of objects that each user has created in any one place.

The visual objects created by users can be any entity: 3D models, images, sounds, text, and so on. Think of how we use GIFs or emojis to express concepts to one another in text messages; consider these critical moments in a conversation, or saved “nodes.” Say I put a hat on the emoji with an editing tool and save; that creates another node. When it is edited, or another object is added and saved again, another node is formed on the timeline.
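One way to model these saved nodes, in the spirit of version-control commits, is a per-room timeline that snapshots a user's objects on each save. The class and field names below are my own assumptions, sketched purely for illustration:

```python
# Sketch of a Thinkithing room timeline: each commit snapshots a
# user's objects as a new node, like a commit in version control.

from dataclasses import dataclass, field

@dataclass
class Node:
    user: str
    objects: list        # 3D models, images, sounds, text, ...
    note: str = ""

@dataclass
class RoomTimeline:
    nodes: list = field(default_factory=list)

    def commit(self, user, objects, note=""):
        """Save the current state of a user's objects as a new node."""
        self.nodes.append(Node(user, list(objects), note))
        return len(self.nodes) - 1   # the node's index on the timeline

    def user_layer(self, user):
        """Isolate one user's nodes, as when highlighting their objects."""
        return [n for n in self.nodes if n.user == user]
```

Committing "emoji", then "emoji plus hat", produces two nodes on the same timeline, and `user_layer` supports the per-user isolation described below.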

In this instance, I used editing tools to adapt objects to make the conversation even more expressive and granular. “Tools” can be anything from a camera to a scale tool to a sound console, to any number of creative applications.

Nodes are visualized on a timeline dashboard in front of each user. The user shown can be switched on that dashboard, so, for example, I can see my friend’s timeline, not just my own. That way I can isolate user layers in a room and highlight only that user’s objects with a glow.

Also saved, below the dashboard, is an audio waveform for each committed save. That audio is the selected user’s freeform discussion of the topic relating to that save. The sound can be panned and sped up, and other users can add audio notes or annotations to any point in time on the waveform timeline. Users can add or delete their own audio relating to past save nodes, and the group will be notified.

Rooms are like whiteboards: they are the creative canvas, or the studio, in which to express moments of a conversation. “Rooms” are displayed as 3D scalable isometric boxes in a tree-like connected structure. Rooms are representations of thought tangents; in version control, these are called branches. Anyone can create a “room” based on a visual object. This is where the conversation diverges. The trunk of that tree might be a conversation about making a video game, for example, and the branches might be the types of video games to make, such as a fantasy role-playing game or an action-adventure. Rooms can be zoomed in to the point of a first-person view and teleported around in using one of the controllers, or zoomed out to see the conversation as a whole.
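The room tree maps directly onto branches. A minimal sketch, with class and method names assumed for illustration, of the video-game example above:

```python
# Rooms as version-control-style branches of a conversation.

class Room:
    def __init__(self, topic, parent=None):
        self.topic = topic
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def branch(self, topic):
        """Open a new room where the conversation diverges."""
        return Room(topic, parent=self)

    def path(self):
        """Trace a tangent back to the trunk of the conversation tree."""
        node, trail = self, []
        while node is not None:
            trail.append(node.topic)
            node = node.parent
        return list(reversed(trail))

trunk = Room("making a video game")
rpg = trunk.branch("fantasy role-playing game")
action = trunk.branch("action-adventure")
```

Walking `path()` from any room recovers the whole tangent, which is what makes a digression recoverable later instead of lost.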


This is a non-linear professional VR creative tool, not a simple chat application. It is for those who value quality communication. Thinkithing essentially offers conversation-based time and space travel; in other words, any discussion can be caught up on later or spatially explored, much like cave paintings.

A conversation may start at one point in time and have an undetermined endpoint. Collaboration could evolve over hours, days, months, or even years. These conversations can be saved and returned to later for reference or shared with whomever the user wishes to send them to.

There are many uses besides ideation, including therapeutic applications such as solving relationship problems or group therapy, and therapists could use it to understand more about their individual patients.

Thinkithing solves the problem of inefficient conversation choices among qualified, intelligent stakeholders. It is a tool that could turn the concept of the meeting into a more practical and productive use of time. It incorporates an efficient visual method of pooling creativity, assisted by artificial intelligence so that the experience is fluid and enjoyable. One day, AI may even join the conversation itself as an intellectual contributor or referee.

MetaGraffiti: Spatial Historical Media Network

November 1, 2017

MetaGraffiti is a media browser that turns the world into a historical library.


What is the experience of a library and how can MetaGraffiti reproduce that experience?

When you visit a library, the process should always be one of discovery, whether you go on purpose or haphazardly.

Different general sections should exist in that library.

Those sections should have books on history-specific topics relating to particular things, people, and events.

Those books should have pages that have specific information on each historical topic.

Those pages should be written by scholars and authors who have researched common documented memories related to the topic.

There should be assistants in the library who can help you find the information you are specifically looking for.

Media should exist in the library to help emphasize the specific ideas you are researching.


Here’s how MetaGraffiti works.

Order of events

MetaGraffiti is one site with one database. Much like Wikipedia, all content is interrelated; like Yelp, it is location-specific information on a map; like Tumblr or Twitter, the site depends on feeds. Feeds have their own sets of map pins. Feeds are created by historical professionals, and users choose whether to follow a feed based on preference.

Historical institutions contribute media to a pool of data for a specific place. You must be invited to participate; however, there will be a public media-contribution form on the main site that puts media in a queue for an institution to approve.

Users can search by a specific time, place, thing, person, or event in an initial search field. (All results return a list of pins on a map.) Map pins are specific to historic locations and contain topics and media related to that time and place. Time and place are navigable. All map pins, when clicked, open to a description page with cards.
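The search flow above can be sketched as a filter over a pin database. The `Pin` fields and search parameters here are my assumptions for illustration, not a documented schema:

```python
# Illustrative sketch of MetaGraffiti's search: query by topic text
# and/or a span of years; results come back as map pins.

from dataclasses import dataclass, field

@dataclass
class Pin:
    lat: float
    lon: float
    year: int
    topic: str
    media: list = field(default_factory=list)

def search(pins, text=None, year_range=None):
    """Filter the pin database by topic text and/or a span of years."""
    results = list(pins)
    if text is not None:
        results = [p for p in results if text.lower() in p.topic.lower()]
    if year_range is not None:
        lo, hi = year_range
        results = [p for p in results if lo <= p.year <= hi]
    return results
```

Each returned pin would then open to its description page with cards, as described above.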

Categories of information

Here’s who can use it.

Because the interface is almost entirely VR/AR, it is geared toward discovering media, not reading long passages of text. The site is fully responsive. Geolocation and a camera are required for AR; otherwise it defaults to a 2D map and a Cardboard/HMD VR interface. A desktop, laptop, phone, or HMD is needed for VR.

Eospaces: Spatial XR Web Browser

October 22, 2017

Eospaces is an XR web browser designed to be shared with friends and used as a form of personal expression.

To begin, let’s imagine we had a virtual blank slate, a white cube we could decorate with anything we wanted. We could scan our own room, for example, and place it in this space, or be totally unreal and imaginative with it. Say we could invite friends over to chat, or pick between different versions of our virtual space by pressing a button. Say we could share individual rooms with various friends, even when we’re not there, or with groups via a community room or gallery.

Our space contains objects: some form a decorative theme, like flowers and trees, while others are interactive, like musical toys, instruments, or books (bookmarks) that open flat websites on a big screen or present information in unique ways. It could also hold paintings and sculptures, bought and sold via subscriptions, that we could size and place around our spaces.

I believe this is all possible with open web technologies and the right kind of browser. Each element in our space could be a unique URL, saved via the browser cache. Imagine chatting with a chatbot that shares content with us as objects; this could be the equivalent of search. Maybe we could find these objects in the metaverse via VR, or in the real world via AR news feeds, bookmark them, take them home, and share them with our friends. The possibilities for extensibility are great.

I personally would love to make art and websites that operate as interactive objects. People need this kind of creativity outside the confines of a social network like Facebook or Snapchat. Creatives need to sell their work to inspire quality content creation, and brands need to share things that make sense, rather than always undertaking something monumental, like building a whole VR world every time they want to make a site.

Supernalis: The Global Operating System

October 10, 2017

There are some challenges with non-game immersive computing. The primary one is fitting services under the umbrella of an operating system. Such services might include complex networked communication, creative applications and the distribution of creative assets, and how history is recorded and experienced, whether for future generations or in retrospect of a recent moment.

I believe immersive computing will meet these challenges by using a simulation of the earth as the foundation for these applications. I call this concept “Supernalis,” or “above all.” Here are some possible Supernalis services that an earth-like OS might provide to 3rd-party app development.


Time-based historical data

For history, up-to-the-moment panning through time is crucial. Time-specific data could be indexed or even stored with a service, not just in 3rd-party applications.

Creative asset placement

Placing still and animated 3D objects in the streets and sky for people to see in VR, and also in complementary AR applications for phones. For example, seeing fireworks on the Fourth of July or placing flowers in the streets; this would hopefully be on a per-application basis.

Archive format & storage solution

Hold traditional file formats, like images and text, under a new, more complex file format, plus a storage and file system for keeping 3D archives for future use.

AI for file retrieval and assistant features

Search files and get help with common tasks.


WebXR, the equivalent of a spatial web

Place WebXR links to verified businesses, or possibly verified home addresses, or place geolocated 3D archives within a social network.

Web payments

Enter live events that businesses host on their verified sites, or buy 3D archives that one can save and redistribute later.

Mini apps, such as those used on mobile phones

Mini apps could take the form of a depth-based widget desktop for things like a calculator, a camera, a phone, etc.

Current global geolocation and relative avatar

For social networking and communication in a metaverse type setting.



I'm a Digital Theorist. I figure out creative ways to use new media. I've been called a "Digital Scientist" and have held titles like "Creative Technologist."

Here's a little bit more about me...

In 1990, I was the youngest business owner in the state of California, thanks to a fully licensed lemonade stand.

From 1992 to 1995, I ran a tiny punk record label called Loquacity, which transformed into an even tinier jazz label called Arcane. During this time, I published three physical albums.

From 2005 to 2010, I published a chiptunes album called “Mixing Numbers - Plastic Sushi” and created a novelty toy company called Utopian Key.

In 2013, I created a digital publishing company called Rubber Acorn and published a magazine called “Experimental Digital Publishing."

From 2014 to 2018, I explored art video games with my project “Ringin’ Jinkies” and began writing about AR/VR product concepts, some of which can be found in the "Ideas" section of this site.

Career-wise, from 1995 to 2010, I worked in two contemporary art galleries, studied media art and fine art, exhibited my work in galleries, and sold to collectors.

From 2009 to now, I’ve been an Interactive Designer at Gershoni Creative Agency, a Senior UX Developer at Dwell Magazine, a Senior Creative Technologist at Character, and an AR/VR Product Designer at two startups.

My mission is to involve myself in the arts in ways that get artists' work seen and shared as broadly as possible.

Thank you for your interest.


2009 - 2010: Art Center College of Design, Pasadena, CA, Fine Art
1995 - 2016: Santa Barbara City College, Santa Barbara, CA, Media Art

Group Exhibitions

2001: Focused on the Forum, Contemporary Arts Forum, Santa Barbara, CA
2000: Una Fiesta Del Arte, Perch, Santa Barbara, CA
2000: Caffeinated, Contemporary Arts Forum, Santa Barbara, CA
2000: Focused on the Forum, Contemporary Arts Forum, Santa Barbara, CA


SFMoMA Tumblr, "Roast Beef Sushi," 07/06/2012
Beautiful/Decay, "Tiny Being - The Existential Work of Cody Pallo", 01/07/2010
The Independent, Duncan Wright, "CAF up Late", 05/18/2000