ARKit 4 on iPadOS introduces a brand-new Depth API, creating a new way to access the detailed depth information gathered by the LiDAR Scanner on iPad Pro. Location Anchoring leverages the higher resolution data in Apple Maps to place AR experiences at a specific point in the world in your iPhone and iPad apps. And support for face tracking extends to all devices with the Apple Neural Engine and a front-facing camera, so even more users can experience the joy of AR in photos and videos.

Depth API

The advanced scene understanding capabilities built into the LiDAR Scanner allow this API to use per-pixel depth information about the surrounding environment. When combined with the 3D mesh data generated by Scene Geometry, this depth information makes virtual object occlusion even more realistic, letting virtual objects be placed instantly and blended seamlessly with their physical surroundings. This can drive new capabilities within your apps, like taking more precise measurements and applying effects to a user’s environment.

The Depth API is specific to devices equipped with the LiDAR Scanner (iPad Pro 11-inch (2nd generation) and iPad Pro 12.9-inch (4th generation)).
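
To read this depth data in code, an app opts into the sceneDepth frame semantic and then reads ARDepthData off each frame. A minimal sketch, assuming a LiDAR-equipped iPad Pro; the helper names are illustrative, not Apple sample code:

```swift
import ARKit

// Sketch: opt into per-pixel scene depth (only available with the LiDAR Scanner).
func runWithSceneDepth(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }
    session.run(configuration)
}

// Sketch: each ARFrame then carries an ARDepthData value with depth and confidence buffers.
func inspectDepth(of frame: ARFrame) {
    guard let sceneDepth = frame.sceneDepth else { return }
    let depthMap = sceneDepth.depthMap            // CVPixelBuffer of depth values in meters
    let confidenceMap = sceneDepth.confidenceMap  // per-pixel confidence for those values
    print(CVPixelBufferGetWidth(depthMap), CVPixelBufferGetHeight(depthMap), confidenceMap as Any)
}
```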

Location Anchors

Place AR experiences at specific places, such as throughout cities and alongside famous landmarks. Location Anchoring allows you to anchor your AR creations at specific latitude, longitude, and altitude coordinates. Users can move around virtual objects and see them from different perspectives, exactly as real objects are seen through a camera lens.

Requires iPhone XS, iPhone XS Max, iPhone XR, or later. Available in select cities.
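
In code, location anchoring goes through ARGeoTrackingConfiguration and ARGeoAnchor. A minimal sketch, assuming geotracking is available where the user is; the coordinate and altitude below are purely illustrative:

```swift
import ARKit
import CoreLocation

// Sketch: check geotracking availability, then anchor content at a real-world coordinate.
func startGeoTracking(on session: ARSession) {
    ARGeoTrackingConfiguration.checkAvailability { available, _ in
        guard available else { return }   // Location Anchors work only in select cities
        DispatchQueue.main.async {
            session.run(ARGeoTrackingConfiguration())

            // Illustrative coordinate and altitude (meters), not taken from the article.
            let coordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
            let geoAnchor = ARGeoAnchor(coordinate: coordinate, altitude: 10)
            session.add(anchor: geoAnchor)
        }
    }
}
```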

Expanded Face Tracking Support

Support for Face Tracking extends to the front-facing camera on any device with the A12 Bionic chip or later, including the new iPhone SE, so even more users can delight in AR experiences. Track up to three faces at once using the TrueDepth camera to power front-facing camera experiences like Memoji and Snapchat.
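
A minimal sketch of opting into face tracking wherever it is supported, and asking for as many simultaneous faces as the device allows; the function name is mine:

```swift
import ARKit

// Sketch: run face tracking on any supported device (A12 Bionic or later with a front camera).
func runFaceTracking(on session: ARSession) {
    guard ARFaceTrackingConfiguration.isSupported else { return }

    let configuration = ARFaceTrackingConfiguration()
    // Request as many simultaneous faces as the hardware supports (up to three with TrueDepth).
    configuration.maximumNumberOfTrackedFaces =
        ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
    session.run(configuration)
}
```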

Discover More ARKit Features

Scene Geometry

Create a topological map of your space with labels identifying floors, walls, ceilings, windows, doors, and seats. This deep understanding of the real world unlocks object occlusion and real-world physics for virtual objects, and also gives you more information to power your AR workflows.
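
In code, Scene Geometry is requested through the sceneReconstruction option on a world-tracking configuration, and the classified mesh then arrives as ARMeshAnchor objects. A minimal sketch, assuming a LiDAR-equipped device; the helper names are illustrative:

```swift
import ARKit

// Sketch: request the classified scene mesh where the LiDAR Scanner is available.
func runSceneReconstruction(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
        configuration.sceneReconstruction = .meshWithClassification
    }
    session.run(configuration)
}

// Sketch: each ARMeshAnchor carries mesh geometry whose faces can be classified
// (wall, floor, ceiling, window, door, seat, table, or none).
func meshAnchors(in frame: ARFrame) -> [ARMeshAnchor] {
    frame.anchors.compactMap { $0 as? ARMeshAnchor }
}
```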

Instant AR

The LiDAR Scanner on iPad Pro enables incredibly quick plane detection, allowing for the instant placement of AR objects in the real world without scanning. Instant AR placement is automatically enabled on iPad Pro for all apps built with ARKit, without any code changes.
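
Apps that place content by raycasting benefit from this automatically. A minimal sketch of such a placement path; the function and anchor name are illustrative, and the point is expected in normalized image coordinates:

```swift
import ARKit

// Sketch: place an anchor at the first surface hit by a raycast from a point on the camera image.
// With the LiDAR Scanner, plane results typically arrive immediately, with no scanning phase.
func placeAnchor(at normalizedImagePoint: CGPoint, in frame: ARFrame, session: ARSession) {
    let query = frame.raycastQuery(from: normalizedImagePoint,
                                   allowing: .estimatedPlane,
                                   alignment: .any)
    guard let result = session.raycast(query).first else { return }
    session.add(anchor: ARAnchor(name: "placedObject", transform: result.worldTransform))
}
```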

People Occlusion

AR content realistically passes behind and in front of people in the real world, making AR experiences more immersive while also enabling green screen-style effects in almost any environment. Depth estimation improves on iPad Pro in all apps built with ARKit, without any code changes.
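
Turning people occlusion on amounts to adding a frame semantic to the configuration. A minimal sketch; the function name is mine:

```swift
import ARKit

// Sketch: enable people occlusion so rendered content can pass behind people in the camera feed.
func enablePeopleOcclusion(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    session.run(configuration)
}
```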

Motion Capture

Capture the motion of a person in real time with a single camera. By understanding body position and movement as a series of joints and bones, you can use motion and poses as an input to the AR experience — placing people at the center of AR. Height estimation improves on iPad Pro in all apps built with ARKit, without any code changes.
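
In code, Motion Capture runs under ARBodyTrackingConfiguration, and each tracked person arrives as an ARBodyAnchor with a skeleton of named joints. A minimal sketch; the helper names are mine:

```swift
import ARKit

// Sketch: run body tracking where supported.
func runBodyTracking(on session: ARSession) {
    guard ARBodyTrackingConfiguration.isSupported else { return }
    session.run(ARBodyTrackingConfiguration())
}

// Sketch: read a joint transform from the tracked skeleton, relative to the body anchor.
func headTransform(for bodyAnchor: ARBodyAnchor) -> simd_float4x4? {
    bodyAnchor.skeleton.modelTransform(for: .head)
}
```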

Simultaneous Front and Back Camera

You can simultaneously use face and world tracking on the front and back cameras, opening up new possibilities. For example, users can interact with AR content in the back camera view using just their face.
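
A minimal sketch of the world-tracking side of this, with face tracking from the front camera folded in; the function name is mine:

```swift
import ARKit

// Sketch: world tracking on the back camera while the front camera tracks the user's face.
func runWorldAndFaceTracking(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsUserFaceTracking {
        configuration.userFaceTrackingEnabled = true  // ARFaceAnchor updates arrive alongside world tracking
    }
    session.run(configuration)
}
```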

Multiple Face Tracking

ARKit Face Tracking tracks up to three faces at once, using the TrueDepth camera on iPhone X, iPhone XS, iPhone XS Max, iPhone XR, and iPad Pro to power front-facing camera experiences like Memoji and Snapchat.
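
Once multiple faces are tracked, each one shows up as its own ARFaceAnchor through the session delegate. A minimal sketch of handling them; the class name and blend-shape choice are illustrative:

```swift
import ARKit

// Sketch: react to several tracked faces at once via the session delegate.
final class FaceTrackingDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        let faces = anchors.compactMap { $0 as? ARFaceAnchor }
        for face in faces where face.isTracked {
            // Each anchor carries its own transform and blend shapes.
            let smile = face.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            _ = (face.transform, smile)
        }
    }
}
```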

Collaborative Sessions

With live collaborative sessions between multiple people, you can build a collaborative world map, making it faster for you to develop AR experiences and for users to get into shared AR experiences like multiplayer games.
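
In code, collaboration is enabled on the configuration, and ARKit then hands you opaque collaboration data to ship to peers over whatever transport you use (MultipeerConnectivity is typical). A minimal sketch; the send(_:) hand-off below is a stand-in for your own networking layer:

```swift
import ARKit

// Sketch: enable collaboration and relay ARKit's collaboration data between peers.
final class CollaborationController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true
        session.delegate = self
        session.run(configuration)
    }

    // ARKit periodically emits data describing the shared world; forward it to peers.
    func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
        if let encoded = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                           requiringSecureCoding: true) {
            send(encoded)
        }
    }

    // Merge data received from a peer into the local session.
    func receive(_ encoded: Data) {
        if let data = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARSession.CollaborationData.self,
                                                              from: encoded) {
            session.update(with: data)
        }
    }

    func send(_ data: Data) { /* hand off to your networking layer */ }
}
```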

Additional Improvements

Detect up to 100 images at a time and get an automatic estimate of the physical size of the object in the image. 3D object detection is more robust, as objects are better recognized in complex environments. And now, machine learning is used to detect planes in the environment even faster.
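
A minimal sketch of the image-detection side of this, assuming a reference-image group named "AR Resources" in the asset catalog; the function name is mine:

```swift
import ARKit

// Sketch: detect a set of reference images and let ARKit estimate their physical size.
func runImageDetection(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages =
        ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) ?? []
    configuration.automaticImageScaleEstimationEnabled = true  // refines the size declared in the catalog
    session.run(configuration)
}
```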

Link to original article
