AI (or Artificial Intelligence) is becoming increasingly popular, and even more so in smartphones as new technology develops to make use of it. The world we live in today seemingly has AI influence around every corner, and the latest AI in smartphones is taking cameras to a whole other level.
AI is driving camera tech more than ever before, and although the Google Pixel 3, which shipped in October 2018, was one of the first smartphones to build its camera around AI, we have come a long way since then. So what does AI in smartphones look like? And how is it influencing our smartphones' cameras?
AI Camera Tech: Where We’ve Been, Where We Are, And Where We’re Going
The Past
As we mentioned before, the Google Pixel 3 was one of the first phones to make use of AI-driven camera tech. Most phones at the time were using dual-lens cameras.
These dual-lens cameras essentially allowed the user to take pictures with better perspective, and the top-range smartphones with this sort of camera allowed for an almost 3D effect in the images they captured. The Google Pixel 3, though, took a different approach.
Google opted for a single-lens camera, but used AI-driven technology to implement something called computational photography – a system capable of replicating the effect of optical zoom and then applying further enhancements for a clearer, crisper photo than ever before.
If that sounds a little confusing, computational photography is simply an AI trick that lets you point and shoot, while the camera's AI chooses the best settings and features to capture the best possible image.
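To make the "camera chooses the best settings" idea concrete, here is a massively simplified sketch of such a decision function in Python. Every name, threshold, and value here is hypothetical and invented for illustration; a real camera pipeline tunes dozens of parameters continuously rather than picking from a short menu.

```python
# Toy illustration of point-and-shoot logic: given a detected scene and a
# measured light level, pick camera settings automatically.
# All setting names and thresholds are hypothetical.

def choose_settings(scene: str, light_level: float) -> dict:
    """Return a settings dictionary for a detected scene.

    light_level is a normalised brightness reading between 0 (dark)
    and 1 (bright); scene is a label such as 'portrait' or 'landscape'.
    """
    settings = {"hdr": False, "night_mode": False, "bokeh": False, "iso": 100}

    if light_level < 0.3:          # dim scene: boost sensitivity
        settings["night_mode"] = True
        settings["iso"] = 800
    elif light_level > 0.8:        # harsh light: blend multiple exposures
        settings["hdr"] = True

    if scene == "portrait":        # blur the background behind a face
        settings["bokeh"] = True

    return settings

print(choose_settings("portrait", 0.2))
```

The point of the sketch is only that the decision happens in software, from measurements the camera makes, with no input from you.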
Essentially, the phone does the thinking for you. That was largely the extent of the Pixel 3's AI back in 2018, but things have developed since then.
The Present
Of course, being the pioneering phone that it was, the Google Pixel 3 influenced AI in smartphones massively. It would be very difficult now to walk into a store and pick up a relatively up-to-date smartphone whose camera didn't use AI.
Think about it: every time you go to take a picture, the phone adjusts to the lighting, the subject you are shooting, and whether it is a selfie or a standard shot, then chooses the camera settings that best suit all of those factors.
That's all AI-driven camera tech really is, but most people wouldn't necessarily identify it as such. Something has to weigh all of those factors and make the best choice so you don't have to, and that's where AI comes in. So let's take a look at some of those features and how they work in a little more depth.
The first step in any AI software is to create something that can almost think for itself. That’s the point of Artificial Intelligence after all. In order to do this, most smartphones now will have something called a Deep Neural Network (or DNN).
Without getting too scientific, this just means that the AI is built from layers of artificial neurons, a structure loosely inspired by our own brain.
This then allows the AI camera to look at the scene and decide whether it is taking a picture of food, landscape, or a face for example. It can then pass on this information elsewhere (more on that below) to choose the best settings for the job.
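To give a feel for what that scene decision looks like at the very last step, here is a hand-written toy classifier in Python: a few made-up image features are scored against each scene label, and a softmax turns the scores into probabilities. Real networks learn millions of weights from training data; the three rows of weights below are invented purely for illustration.

```python
import math

# Heavily simplified sketch of a scene-classifying network's final layer:
# turn a handful of image features into a probability for each scene label.
# Weights and features are made up for illustration only.

LABELS = ["food", "landscape", "face"]

# One weight vector per label, one weight per feature
# (features: warm-colour ratio, green/blue ratio, skin-tone ratio).
WEIGHTS = [
    [2.0, -1.0, -1.0],   # food: lots of warm colours
    [-1.0, 2.0, -1.0],   # landscape: lots of green and blue
    [-1.0, -1.0, 2.0],   # face: lots of skin tones
]

def classify_scene(features):
    """Return (best_label, probabilities) via a softmax over label scores."""
    scores = [sum(w * f for w, f in zip(row, features)) for row in WEIGHTS]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = LABELS[probs.index(max(probs))]
    return best, probs

label, _ = classify_scene([0.1, 0.2, 0.9])  # features dominated by skin tones
print(label)
```

A phone's actual DNN works on raw pixels rather than three tidy features, but the shape of the answer is the same: a label and a confidence for each possible scene.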
The DNN is also integral to the face recognition software that securely unlocks our devices, so you can see how detailed the information captured by the DNN really is.
In order for all of this information from the DNN to be used effectively though, something called a Neural Processing Unit (NPU) is needed.
Simply put, this is just the place where all of the information from the DNN is processed, allowing the camera to then select the best settings for the task. It's built into the phone's processor and works in a similar way to a CPU, but it is specialised for the parallel maths that neural networks rely on.
The final step required for flawless photography is something called an Image Signal Processor (ISP). The ISP works after the image has been taken: before the phone's camera presents the final image to you, the ISP applies certain changes to correct any flaws.
Essentially, it can highlight what needs to be highlighted and make the objects in the frame even crisper, so you get a final image that is as close to perfect as possible. Has your phone ever told you to hold the device steady for a few seconds after taking a picture, for example? That is just this processing at work before you are presented with the final shot.
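As an illustration of the kind of correction step an ISP performs, here is a toy version of unsharp masking, a classic sharpening technique that crisps up edges by exaggerating the difference between a pixel and a blurred copy of it. Real ISPs run in dedicated hardware over full colour frames; this sketch works on a single row of grayscale values just to show the idea.

```python
# Toy unsharp masking on one row of grayscale pixel values (0-255).
# Illustrative only: real ISP pipelines are far more elaborate.

def box_blur(row):
    """Blur each pixel by averaging it with its immediate neighbours."""
    blurred = []
    for i in range(len(row)):
        window = row[max(0, i - 1): i + 2]
        blurred.append(sum(window) / len(window))
    return blurred

def unsharp_mask(row, amount=1.0):
    """Sharpen: add back the detail the blur removed, scaled by `amount`."""
    blurred = box_blur(row)
    return [
        min(255, max(0, round(p + amount * (p - b))))
        for p, b in zip(row, blurred)
    ]

row = [10, 10, 10, 200, 200, 200]   # a soft edge between dark and bright
print(unsharp_mask(row))
```

Run on that row, the dark side of the edge gets darker and the bright side brighter, which is exactly the "crisper objects" effect described above.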
This computational photography is more common now than ever, and different AI software is making it easier than ever to capture flawless images too. Part of AI is being able to learn as well as think, though.
And this is present in smartphones too, through something called machine learning. You might notice it in practice when your phone creates those automatic highlight videos from images captured throughout the year, or sorts your photos into categories such as 'dog'.
This is just your phone learning from the images you take, and sorting them into categories that might be of interest to you. So as you can see, AI camera tech is constantly working away behind the scenes to make capturing expert level images simpler than ever, without you needing to make any decisions for yourself.
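The sorting described above can be sketched as nearest-centroid matching: each photo gets a feature vector, and it joins the album whose "typical" vector it sits closest to. The file names, vectors, and categories below are all made up for illustration; real phones use learned image embeddings with far more dimensions.

```python
import math

# Toy sketch of sorting photos into albums by nearest category centroid.
# All vectors and names are invented for illustration.

CATEGORY_CENTROIDS = {
    "dog":   [0.9, 0.1, 0.2],
    "beach": [0.1, 0.9, 0.8],
    "food":  [0.3, 0.2, 0.9],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def categorise(photo_features):
    """Assign a photo's feature vector to the nearest category."""
    return min(CATEGORY_CENTROIDS,
               key=lambda c: distance(photo_features, CATEGORY_CENTROIDS[c]))

def build_albums(photos):
    """Group a dict of {filename: features} into albums by category."""
    albums = {}
    for name, features in photos.items():
        albums.setdefault(categorise(features), []).append(name)
    return albums

photos = {
    "IMG_001.jpg": [0.85, 0.15, 0.25],   # close to the 'dog' centroid
    "IMG_002.jpg": [0.2, 0.8, 0.7],      # close to the 'beach' centroid
}
print(build_albums(photos))
```

The learning part is what produces those feature vectors and centroids in the first place; once they exist, grouping your library is just this kind of simple matching.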
The Future
As for where AI in smartphones, and its ability to shape camera performance, might go next, there's no telling. We do know that AI is in constant development as people try to create ever more intelligent systems, and this will certainly be the case with AI-driven camera tech for smartphones too.
Could we see an AI capable of re-editing our previous photos for even better results? Or cameras that suggest an even better shot than the one we're trying to take? We can't be sure, but we're excited to watch how AI in smartphones develops in the future.