Artificial intelligence, the center of the future industry, and Unity

  • Subject: Virtual Space, Artificial Intelligence, and Unity
  • Lecturer: Oh Ji-hyun – Evangelist Team Leader, Unity
  • Presentation areas: Machine Learning, Artificial Intelligence, Metaverse
  • Lecture time: 2021.11.17 (Thu) 15:00 ~ 15:50
  • Lecture summary: Shares insights on the artificial intelligence that will change the world and the virtual space (metaverse) that will advance it, and introduces the Unity product lines to which artificial intelligence is applied.
  • “In the future, artificial intelligence will be at the center of the world and will change many things. So the biggest question will be whether we are dominated by artificial intelligence, or dominate it.”

    This is the question that Unity evangelist team leader Oh Ji-hyun posed before beginning the lecture in earnest.

    Computers, the starting point of the third industrial revolution, have greatly changed our world over the past 30 years, and the next driver is artificial intelligence: it will be at the center of change for the next 30 years. Even at this moment, artificial intelligence is being used across industries such as automobiles, robotics, mobility, computer vision, and games.

    In this lecture, team leader Oh Ji-hyun talked about the enormous development potential of virtual space and artificial intelligence, and emphasized Unity’s role in it. He also introduced four representative product groups related to the artificial intelligence industry, with various examples.


    ■ Metaverse and artificial intelligence – Digital Transformation


    Recently, as the word “metaverse” has spread like a fad, many negative views of virtual space have arisen. Although the word has existed for a long time, it has only recently entered common use, so there is much debate about its definition and substance. This is especially true in Korea, where the discussion mainly revolves around ZEPETO, Roblox, virtual real estate investment, and NFT investment.

    However, the core value of the metaverse, as team leader Oh Ji-hyun sees it, is ‘digital transformation’: moving many of the things that happen in reality into digital space, then bringing what is gained there back into reality, so that the virtual and the real create synergy with each other.

    A number of industries, including automotive, manufacturing, and architecture, already use this kind of virtual space. For example, at CES 2019 automobile maker Nissan unveiled ‘I2V (Invisible-to-Visible)’, which merges reality and virtual reality. Team leader Oh Ji-hyun repeatedly emphasized, “The metaverse is being used in a broader sense, and in broader and more diverse fields, than is commonly thought.”

    If Nissan’s ‘I2V’ connected car shows the near future of research incorporating the metaverse, autonomous driving represents the present. The prevailing opinion is that autonomous driving will be commercialized within 10 years, and according to those actually studying it, the algorithms themselves are already very advanced. The bottleneck is training.

    The key to artificial intelligence is exposing it to many situations so it can learn and become smarter, but going through that trial and error in the real world takes too much time and money, and the risk is too high. Virtual space is therefore an excellent tool for training artificial intelligence, and every company researching autonomous driving makes use of it.

    This is natural for automobile companies, but major companies we all know, such as Google, Apple, Samsung, Microsoft, and LG, are also devoting themselves to self-driving research. The prize is the operating system and platform that autonomous driving will run on: each aims to make autonomous driving a killer application and embed its own platform underneath it.

    Team leader Oh Ji-hyun said, “Unity will be included in various platforms that use artificial intelligence, including autonomous driving. That is my wish, and I also predict it will turn out this way in reality.”

    He then introduced four representative Unity product families that can be used in the artificial intelligence industry: Simulation and System Graph, the Robotics package, Perception, and Machine Learning Agents.


    ■ Autonomous Driving, Robotics, and Image Recognition – Simulation, Robotics Package, Perception


    Unity Simulation and System Graph are products that help you simulate and test artificial intelligence within Unity. Autonomous driving applies not only to cars but to anything that moves, such as robot vacuums, delivery robots, and walking robots. That means an enormous number of situations to cover, and testing on a single PC has its limits, so Unity provides simulation that can run on a cloud system.

    Another important element in autonomous driving is the sensor; typical examples are cameras, radar, and lidar. The camera is the ordinary camera we all know: thanks to its long history, quality is high and prices are low, and every car comes equipped with one by default. However, cameras have several limitations related to lighting. Radar, which uses radio waves, is also widely used, but it has the weakness of being sensitive to weather.

    Therefore, the sensor generating buzz in the automobile industry these days is lidar. Lidar is the same concept as radar, but it fires a laser instead of radio waves, checking distances by shooting beams in all directions. A camera produces a 2D image and so has no intuitive depth data; lidar does. Conversely, lidar’s disadvantages are that it is expensive, noisy, and bulky in appearance.
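The idea of a lidar “checking distance by shooting a laser in all directions” can be sketched in a few lines. The following is a toy 2D ray-marching model, not any real lidar or Unity API; the wall layout and step size are illustrative assumptions.

```python
import math

def lidar_scan(px, py, walls, num_rays=36, max_range=50.0, step=0.05):
    """Toy 2D lidar: cast rays in all directions from (px, py) and
    return the distance to the nearest wall along each ray
    (max_range if nothing is hit).

    walls: list of axis-aligned boxes (xmin, ymin, xmax, ymax).
    A real lidar times laser pulses; here we simply march each ray
    forward in small steps until it enters a box.
    """
    distances = []
    for i in range(num_rays):
        angle = 2 * math.pi * i / num_rays
        dx, dy = math.cos(angle), math.sin(angle)
        d = 0.0
        while d < max_range:
            x, y = px + dx * d, py + dy * d
            if any(xmin <= x <= xmax and ymin <= y <= ymax
                   for xmin, ymin, xmax, ymax in walls):
                break
            d += step
        distances.append(min(d, max_range))
    return distances

# Sensor at the origin, one wall 10 units to the right.
scan = lidar_scan(0.0, 0.0, walls=[(10.0, -5.0, 11.0, 5.0)])
```

Unlike a single camera image, each entry of `scan` is direct depth information for one direction, which is exactly what makes this kind of sensor attractive for obstacle avoidance.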

    Team leader Oh Ji-hyun believed, however, that time would solve these shortcomings. Lidar is already being actively used not only in automobiles but also in drones, robot vacuums, and industrial robots, and as a result several lidar manufacturers provide virtual datasets that can be tested in Unity’s virtual environment.

    System Graph is a package for controlling and testing systems in a node-graph manner. Since it provides an integrated environment, it can be applied easily even by those who are not familiar with Unity.

    Robotics, like autonomous driving, is an area where artificial intelligence research is very active, and the Unity Robotics package is, literally, a package for researching and testing robotics. Robot automation is already used in many places, but it has limits because the robots move on a fixed schedule: if one robot jams, the entire production line stops. Robots that can make decisions and move on their own are therefore attracting attention.

    Moreover, even after automation, finishing work requiring small, precise movements still had to be done by hand, because robots could not cope with the environment or unexpected variables. Team leader Oh Ji-hyun predicted that this too will be replaced by artificial intelligence over time. The tool most used in this field is ROS (Robot Operating System).

    The core of the Robotics package is that it creates an environment where ROS and Unity can communicate. Research that previously required a real robot can now be moved into Unity and carried out in virtual space. Compared with autonomous driving, robotics is still at a stage where much more research is needed; nevertheless, team leader Oh Ji-hyun found it positive that the technology is developing so rapidly, and that Unity is contributing to that development.

    Unity can also be used for image recognition, which means identifying what an image or video contains. Since this too is machine-learning based and requires a great deal of training data, one approach is to generate synthetic data in a virtual space and feed it to the model. That is what Unity Perception does.
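The synthetic-data idea can be illustrated with a toy generator. This is not the actual Perception API: in the real package a 3D scene is rendered with randomized placement and the labels are captured automatically, while here a random object position and its bounding-box label are simply fabricated in image coordinates; the class name and sizes are made up for illustration.

```python
import random

def make_synthetic_sample(width=640, height=480, obj_size=64, rng=random):
    """Toy stand-in for Perception-style data generation: place an
    object at a random position in the frame and return the
    bounding-box label a detector would be trained on."""
    x = rng.randint(0, width - obj_size)
    y = rng.randint(0, height - obj_size)
    return {
        "class": "car",  # hypothetical class name
        "bbox": (x, y, x + obj_size, y + obj_size),  # xmin, ymin, xmax, ymax
    }

# A thousand labeled samples in milliseconds -- the point of synthetic
# data is that labels come for free, at any volume.
rng = random.Random(0)
dataset = [make_synthetic_sample(rng=rng) for _ in range(1000)]
```

The randomized placement stands in for the domain randomization that makes synthetic datasets generalize to real imagery.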


    ■ Machine Learning Agents – Suite for game developers


    Finally came the product most closely related to games: Machine Learning Agents. Machine Learning Agents is an environment for developing and testing machine-learning agents within Unity, and its most basic technique is reinforcement learning. Reinforcement learning can be thought of simply as house-training a dog: when it uses the toilet pad correctly it is rewarded, when it does not it receives a negative signal, and repeating this teaches it to use the pad.

    The same applies to machines. If you create a situation and let the machine act arbitrarily, it learns by itself through repeated rewards and penalties. In the past, when building NPCs or monsters for a game, every rule had to be coded by hand.
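The reward-and-penalty loop described above can be sketched with minimal tabular Q-learning. This is a toy corridor environment, not the ML-Agents API; all names and parameter values are illustrative.

```python
import random

def train_corridor_agent(n_states=5, episodes=300, alpha=0.5, gamma=0.9,
                         epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy corridor: the agent starts at state 0
    and is rewarded only when it reaches the goal at the right end.
    Repeated reward and penalty stand in for the 'toilet-pad training'
    loop described in the text.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit, occasionally explore
            a = rng.randrange(2) if rng.random() < epsilon else \
                (0 if q[s][0] > q[s][1] else 1)
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else -0.01  # small step penalty
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_corridor_agent()
# After training, 'right' should look better than 'left' in every state.
policy = [0 if qa[0] > qa[1] else 1 for qa in q]
```

No rule saying “move right” was ever coded; the preference emerges entirely from the reward signal, which is the contrast with hand-coded NPC logic the lecture draws.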

    Unity Machine Learning Agents provides an environment that works with existing machine-learning libraries such as PyTorch and TensorFlow, and supplies several algorithms and curricula for developers who are not familiar with artificial intelligence.

    Team leader Oh Ji-hyun then showed, through several demonstrations, how Unity’s machine learning agents can be applied in practice. The first was making a square box balance a ball without dropping it. The agent could observe only three things: the box’s tilt angle and the ball’s position and speed. By repeatedly penalizing the agent when the ball dropped and continuously rewarding it while the ball stayed up, the desired behavior emerged.
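The reward scheme described for this demo (a small positive reward each step the ball stays up, a penalty when it drops) might be sketched as follows. The values are illustrative, not Unity’s actual 3DBall example settings.

```python
def step_reward(ball_on_platform: bool) -> float:
    """Hedged sketch of the reward scheme described in the demo:
    a small positive reward every step the ball stays on the box,
    and a one-time penalty when it falls off (ending the episode)."""
    return 0.1 if ball_on_platform else -1.0

# Accumulated return over a 50-step episode that ends with a drop:
episode = [True] * 49 + [False]
total = sum(step_reward(on) for on in episode)
```

Because longer balancing accumulates more reward, maximizing the return is the same as keeping the ball up as long as possible; that alignment between reward and intent is the whole design problem.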

    There were also demonstrations of a worm-like agent crawling to a target position, and of bipedal walking toward a target once arms and legs were attached. A step further was quadrupedal locomotion: in the demonstration team leader Oh Ji-hyun presented, when a stick is thrown, a dog runs to its location and comes back with it.

    In a more game-like demo, several agents cooperated to defeat a target, then used a key to unlock a door and escape. Such algorithms can make a game’s monsters and supporting characters smarter.

    One of the genres where machine learning agents have been most used recently is match-3 puzzle games, because AI can test hundreds of levels quickly and accurately. Several game companies, including SundayToz, have already adopted this approach or are researching it.

    Finally, a simplified version of the autonomous driving mentioned earlier can also be implemented with a machine learning agent: a racing game. The task is to drive a kart around a set course, and by applying the concept of a lidar sensor that fires beams in all directions, the agent completes the course along the shortest path without hitting the walls.

    Reference: www.inven.co.kr