A Review on Quantum Image Processing

This paper is a review of research in quantum image processing (QIP), storage, and retrieval. It discusses the problems silicon-based computing faces when processing big data for machine learning tasks such as image recognition, and how quantum computation can address these challenges. The paper first introduces the challenges Moore's law presents to traditional computer processors, along with an introduction to quantum computers and how they address those challenges. It then describes how quantum computation evolved into the field of image processing, followed by a discussion of the advantages and disadvantages found in current research and applications.

I. Introduction

Moore’s law is a common topic in discussions of the evolution of transistors in computer processors. It is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years. The prediction is based on observation and projection of transistor size rather than on any natural phenomenon, yet it has proved accurate. This is cause for concern: at some point, transistors can only shrink so far and still function. With the demand for storage, retrieval, and processing of big data sets in machine learning, the pressure on transistors to shrink and improve in efficiency is high, but the ability to do so is limited.

Figure 1. An example of Logic Gates.

A. Traditional Computation

The computer processor is made up of modules, which are made of logic gates, which in turn are made of transistors. A transistor is the simplest form of data processor, responsible for blocking or passing bits. Transistors act as electrically powered switches; because the system is electrical, a transistor is really blocking or allowing electrons to pass through. Transistors in traditional systems measured about 14 nanometers in 2014 and 10 nanometers in 2017. Transistors are combined into logic gates, for example the “AND” and “OR” gates in Figure 1, and combining logic gates yields complex modules that can perform operations such as arithmetic.
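To make this composition concrete, here is a minimal Python sketch of a half-adder, the smallest arithmetic module, wired together from an AND gate and an XOR gate (the XOR is assumed here alongside the gates shown in Figure 1). This is illustrative only, not how hardware is actually built:

```python
# A minimal sketch: logic gates composing into an arithmetic module.
def AND(a: int, b: int) -> int:
    return a & b

def XOR(a: int, b: int) -> int:
    return a ^ b

def half_adder(a: int, b: int):
    """Add two bits: XOR produces the sum bit, AND produces the carry bit."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```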

The projections of Moore’s law assume there is no limit to how small transistors can shrink, but there is always a limit. When a transistor approaches about a single nanometer, a quantum effect called electron tunneling allows electrons to “teleport” through it. The transistor can then no longer stop electrons from passing through, or in other words is always switched on. Traditional computers cannot handle these quantum effects, and this is where research in quantum computation began [1].

B. Quantum Computation

Research in quantum computers began because quantum computers can handle quantum problems, unlike traditional silicon-based computers. Quantum computers use qubits, which can be realized by any two-level quantum system, such as the spin of a particle in a magnetic field or the polarization of a single photon. A qubit is not equivalent to a classical bit, but both are the smallest unit of computation for their respective systems and are represented by the two states 0 and 1, as illustrated in Figure 2. For a photon, the basis states 0 and 1 can correspond to vertical and horizontal polarization. The advantage of qubits is that they are not limited to either 0 or 1: they can be in any proportion of both states at once, a concept known as superposition. Superposition, in simple terms, is a property of linear systems in which the net response at a given place and time caused by two or more stimuli is the sum of the responses each stimulus would have caused individually [2]. A quantum superposition is a weighted “average” whose weights are complex numbers, and those weights encode the probability that the qubit will be measured in each of its basis states [6].

Figure 2. Bits in Traditional vs. Quantum Computers.
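To make the amplitude picture concrete, here is a small NumPy sketch of a single qubit state. The vector components are the complex weights described above, and squaring their magnitudes gives the measurement probabilities; this is a classical simulation for illustration, not a program for a quantum device:

```python
import numpy as np

# A qubit state |psi> = a|0> + b|1> as a 2-component complex vector.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An equal superposition of both basis states.
psi = (ket0 + ket1) / np.sqrt(2)

# Probability of measuring each basis state is |amplitude|^2.
probs = np.abs(psi) ** 2
print(probs)                          # [0.5 0.5]
print(np.isclose(probs.sum(), 1.0))   # amplitudes are normalized
```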

As in the story of Schrödinger’s cat, once you observe or test the value of a qubit, it must settle into one state, for example vertically or horizontally polarized. While it is not being observed, however, it can remain in a state of superposition, not limited to one state. In short, the value cannot be predicted until it is observed. One workaround is entanglement, in which a qubit’s properties can be deduced by measuring another qubit entangled with it.

Entanglement is when one qubit responds to another qubit’s state instantaneously [2]. In more formal terms, quantum entanglement is a correlation between qubits, no matter how far apart they are: entangled quantum systems produce correlated physical results when each is measured. Knowing the state of one qubit therefore reveals the state of another without observing it. This property implies that the quantum state of each qubit cannot be described independently of the others; they are connected. The mathematics behind this theory is beyond the scope of this paper, but quantum entanglement can be distinguished from classical systems at a high level: the mathematical formalism for composing quantum systems is the tensor product, while classical systems rely on the cartesian product [6]. The tensor product describes a system made of multiple subsystems (e.g. an image of n x m pixels), and its dimension is the product of the subsystem dimensions, whereas a cartesian-style composition grows only with the sum of the dimensions of the component spaces.
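The difference in how the two compositions scale can be checked numerically. The following sketch (plain NumPy, illustrative only) composes n = 10 two-level systems: the tensor product’s dimension is the product 2^n, while a cartesian-style composition grows only as the sum 2n:

```python
import numpy as np

n = 10
single = np.array([1, 0], dtype=complex)   # one qubit, dimension 2

# Tensor product: the joint state of n qubits lives in a space whose
# dimension is the PRODUCT of the subsystem dimensions (2^n).
joint = single
for _ in range(n - 1):
    joint = np.kron(joint, single)
print(joint.size)   # 1024 = 2**10

# A cartesian-style composition only ADDS dimensions (2n).
print(2 * n)        # 20
```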

Qubit manipulation is analogous to logic gates in classical computation, with added complexity. A quantum gate takes a superposition as input, rotates its probability amplitudes, and produces another superposition as output. Quantum gates can perform all the operations classical logic gates perform, with the addition of being reversible, though they do so through a different set of gates: the NOT (Pauli-X), Pauli-Z, Hadamard, and CNOT gates. A quantum logic circuit, corresponding to a module in the classical system, is a sequence of these logic gates acting on multiple qubits. Explaining these gates in detail is beyond the scope of this paper, but their matrix forms are sketched below.
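Here is a small NumPy sketch of those four gates in their standard textbook matrix forms, with a two-gate circuit (Hadamard then CNOT) that entangles two qubits. This is a classical simulation of the mathematics, not quantum hardware:

```python
import numpy as np

# Standard matrix forms of the gates named above.
X = np.array([[0, 1], [1, 0]])                   # NOT (Pauli-X)
Z = np.array([[1, 0], [0, -1]])                  # Pauli-Z
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# A two-gate circuit: Hadamard on the first qubit, then CNOT, applied to |00>.
ket00 = np.zeros(4); ket00[0] = 1
state = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(state)   # [0.707, 0, 0, 0.707]: the entangled Bell state

# Reversibility: applying the inverse circuit recovers |00> exactly.
inverse = np.kron(H, np.eye(2)) @ CNOT   # both gates are their own inverses
print(inverse @ state)                   # back to [1, 0, 0, 0]
```

The output state assigns equal amplitude to 00 and 11 and none to 01 or 10: measuring one qubit immediately determines the other, which is the entanglement described above.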

In summary, a quantum computer creates qubits, applies quantum gates to entangle those qubits and manipulate their probability amplitudes, and finally measures the outcome, collapsing the superposition to an actual sequence of 0’s and 1’s. This sequence of 0’s and 1’s must be able to interact with classical hardware; the interaction is known as the quantum-classical interface [6]. Quantum measurement, the observation of the qubits, is performed between the classical hardware and the quantum physical layer, as portrayed in Figure 3. This area is an active research field due to the challenges of building larger quantum systems. A toy version of the final measurement step is sketched after Figure 3.

Figure 3. Example of a quantum-classical interface [10]
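The sketch below simulates that last step for the Bell state built earlier: classical hardware only ever sees bitstrings, sampled with probability equal to the squared amplitude of each basis state. Again this is an illustration in NumPy, not an actual interface:

```python
import numpy as np

rng = np.random.default_rng(0)

# The Bell state from the previous sketch, over basis states 00, 01, 10, 11.
state = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Measurement collapses the superposition to a classical bitstring,
# sampled with probability |amplitude|^2.
probs = np.abs(state) ** 2
outcomes = rng.choice(["00", "01", "10", "11"], size=10, p=probs)
print(outcomes)   # only '00' and '11' appear: the qubits are correlated
```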

II. Problems: Big Data

Using machine learning algorithms to classify images is currently a popular research area. The goal is to imitate human image recognition using complex algorithms. The most commonly used technique is the neural network, which imitates the structure of the human brain with only a fraction of its neurons (or nodes). An example of an image classification algorithm is the convolutional neural network (CNN). Explaining this algorithm in detail is beyond the scope of this paper, but Figure 4 illustrates the process. In summary, it shrinks the image into smaller vectors using convolutions and pooling techniques; this smaller numerical representation of the image is what the neural network is trained and tested on, and it is eventually used to classify new images.

Figure 4. Convolutional Neural Network (CNN)
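The two shrinking operations named above can be shown in a few lines of NumPy. This is a bare-bones sketch of one convolution followed by one max-pool (the function names conv2d and max_pool are ad hoc, not from any library), enough to see how an 8x8 image becomes a 3x3 feature map:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (no padding), the building block of a CNN."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(img, size=2):
    """Downsample by keeping the maximum of each size x size block."""
    h, w = img.shape
    return img[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.random.default_rng(1).random((8, 8))   # a toy 8x8 "image"
edge = np.array([[1, 0, -1]] * 3)               # simple edge-detecting kernel
features = max_pool(conv2d(img, edge))
print(img.shape, "->", features.shape)          # (8, 8) -> (3, 3)
```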

An important aspect of training neural networks to high accuracy is using very large datasets from the prediction domain: the network needs to see a very broad distribution of samples to learn to predict new samples accurately. Data collection and processing take a large amount of time on traditional systems, on the order of days or even weeks. Furthermore, image quality has improved over time, and processing high-quality images requires obtaining and saving more information, increasing the demand for efficient storage and retrieval. This exposes the major challenge in classical computers: a lack of flexibility to address growing needs. Feature extraction from these images is also the most important aspect of supervised machine learning problems. With millions of high-quality images, this process is impossible on an average machine and takes days to weeks on a cloud machine with ample memory or a GPU in place of a CPU. There is therefore a demand to improve the efficiency not only of image storage, but of image processing as well.

A. Data Storage and Retrieval

At a high level, the lack of flexibility in classical computers stems from the architecture of classical memory cells. These cells are independent of one another, and so is the information stored within them; the only way to correlate stored values (e.g. pixels) is either to store the correlation values directly or to execute algorithms that compute them from the stored values [2]. Beyond general data storage, storing images specifically is its own research field, especially regarding how the reconstruction of images can be improved with quantum computation.

Image reconstruction on classical computers requires storing both light parameters and additional information such as correlations and the spatial disposition of pixels. This is most pronounced in reconstruction itself, where the binary information of the pixels is not enough: their spatial distribution is also needed, and if other tasks are performed on the images, even more information may be required. Storage therefore involves a large set of independent bits, each representing some property of the image, and retrieval involves independent measurements of those bits and their properties. With large images and very large datasets, an excessive volume of bits is stored, and reconstruction consumes a large amount of memory and processing power because each bit must be processed individually. The challenges of volume and reconstruction add to the general problems of data storage in classical computers, demanding scalable storage space and efficient methods for retrieving all the information.

Database management systems are the most common means of storing big data. Traditional, centralized database systems face challenges with scalability, consistency, and availability; their main bottleneck is I/O. As the size of data grows, latency and the need for more machinery grow as well. This led to the emergence of parallel and distributed database systems, which generally scale better and process much faster when implemented in a parallel fashion. They handle consistency, scalability, and larger datasets better, but no system is perfect, and research continues on improving them, especially the tradeoff between consistency, availability, and partition tolerance (CAP).

Modern systems use distributed database management systems, most often hosted on a cloud service. As mentioned, these systems trade off consistency, availability, and partition tolerance. Consistency ensures every read receives the most recent write; availability ensures every request receives a meaningful response; partition tolerance ensures the system continues to operate despite an arbitrary number of messages being dropped by the network between nodes. The major bottleneck in these systems is network I/O.

In summary, among the current problems in industry is the demand for highly scalable and efficient storage and retrieval of complex data. Classical systems are limited because the retrieval and measurement of the data associated with an image are done independently, resulting in a lack of parallelism during reconstruction and large volumes of data per image. Although distributed databases hosted on the cloud address scalability, there is no way to change how images are stored at the lowest level so as to allow parallel reconstruction and reduce the volume of additional data stored per image.

B. Image Processing

This topic relates to the retrieval and reconstruction of images discussed above. Current means of extracting features from images lack parallelism and efficiency. Because retrieval and reconstruction involve independent measurements of pixel bits and their properties, processing is wasted when it cannot be parallelized, or when the data is not stored in a manner that correlates these bits from the start. Extracting features for a machine learning algorithm has the same limitation: each bit and its properties are measured independently and translated into feature vectors that can be fed into a neural network. Measuring relationships between pixels and/or their properties also requires an independent step that may take several iterations. Quantum image processing algorithms address this through entanglement, which we discuss in the next section.

III. Solutions: Big Data

A. Image Storage

Using quantum algorithms, database storage and retrieval show faster performance relative to classical systems. Grover’s algorithm [8] is an example of such an improvement: in an unordered database with n entries, a classical system takes O(n) time to search for an item, while Grover’s algorithm takes only O(sqrt(n)). The advantage grows as the database gets larger, demonstrating the improved scalability of quantum algorithms. Grover’s algorithm underlies most research in the quantum database field, for example [14–16].
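The O(sqrt(n)) behavior can be seen in a toy state-vector simulation of Grover’s search [8]. This sketch simulates the algorithm’s mathematics classically (so it is exponentially slow on real hardware and purely illustrative): the oracle flips the sign of the marked entry’s amplitude, and the diffusion step inverts every amplitude about the mean; after roughly (pi/4)·sqrt(N) iterations, nearly all probability sits on the marked item:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Toy state-vector simulation of Grover's search."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    for _ in range(int(np.pi / 4 * np.sqrt(N))):  # ~O(sqrt(N)) iterations
        state[marked] *= -1                       # oracle marks the item
        state = 2 * state.mean() - state          # diffusion: invert about mean
    return np.abs(state) ** 2

probs = grover_search(n_qubits=8, marked=42)      # N = 256 database entries
print(probs[42])   # ~1.0 after only ~12 iterations, versus ~256 classical probes
```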

Research on quantum image processing (QIP) started in 1997 [1] but slowed until around 2010, when the topic became attractive again. QIP studies how to define and represent images using quantum states and how to implement operations on those states. Novel ways of storing and retrieving images on quantum computers demonstrate better performance than classical systems. The three principal image formats are the Qubit Lattice, the Real Ket, and the FRQI [2–4]. The qubit lattice, for example, represents each pixel as a qubit, yielding a two-dimensional array of qubits. This format has its own challenges, however, and offers little improvement over the classical method, which led to research on the entangled image model.

Using quantum search algorithms such as Grover’s [8–9], the vertex positions of stored shapes can be retrieved much faster than on classical systems. This process stores an image in a qubit array for later retrieval. On its own it handles only simple geometric shapes (e.g. a single triangle), while complex images are made of many geometric shapes. Entanglement can be used to establish nonlocal correlations between the qubits storing the vertex locations of multiple, repeated geometric shapes [9]: identifying which particles reside in maximally entangled states is sufficient to locate the triangle vertices and determine which shape each belongs to. The mathematics behind this formulation is beyond the scope of this paper but is demonstrated in [9].

Figure 5. Entanglement between vertices belonging to the same shape, represented by the dashed lines, is used to distinguish each shape from the others.

Other formats are more commonly used because they outperform the entangled image model. The Real Ket model performs better than both the qubit lattice and the entangled image models because it is not limited to representing images as a two-dimensional array of qubits. It instead uses image quartering to iteratively build a balanced quadtree index, with images represented as quantum states whose coefficients are the grey levels [11].

Figure 6. An example of a quadtree.
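The quartering idea is easy to sketch classically. The function below (an ad-hoc illustration, not the actual Real Ket construction) recursively splits a 2^n x 2^n image into its four quadrants until single pixels remain, producing the balanced quadtree that the Real Ket indexing builds on:

```python
import numpy as np

def quadtree(img):
    """Recursively quarter an image into a balanced quadtree;
    leaves hold individual pixels' grey levels."""
    if img.shape == (1, 1):
        return float(img[0, 0])               # leaf: one pixel's grey level
    h, w = img.shape
    return [quadtree(img[:h//2, :w//2]),      # upper-left quadrant
            quadtree(img[:h//2, w//2:]),      # upper-right
            quadtree(img[h//2:, :w//2]),      # lower-left
            quadtree(img[h//2:, w//2:])]      # lower-right

img = np.arange(16).reshape(4, 4)             # toy 4x4 greyscale image
print(quadtree(img))                          # nested 4-way structure, depth log2(4)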

This representation performs a task similar to what convolutions and pooling perform, showing potential for machine learning applications. The most widely used representation for quantum images, however, is the FRQI, proposed by [12–13]. The color and position of every pixel are represented in a normalized state and integrated into a single quantum state by applying the tensor product. Quantum measurement must then be used to extract the information needed to retrieve and reconstruct the image; this process is discussed more thoroughly in the following section. In summary, researchers have focused on the FRQI model because it maintains an entangled two-dimensional qubit sequence with greyscale information stored as the probability amplitude of a single qubit in the sequence, making it the most suitable and flexible model for QIP algorithms.
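As a concrete (and purely classical) illustration, the sketch below builds the FRQI amplitude vector for a tiny 2x2 image, assuming the standard angle encoding in which a grey level g in [0, 1] becomes theta = g·pi/2 and is stored as cos(theta)|0> + sin(theta)|1> attached to each pixel-position basis state. The qubit ordering convention here is chosen for readability:

```python
import numpy as np

def frqi_state(img):
    """Amplitude vector of an FRQI-style state for a square greyscale image:
    one color qubit per pixel position, uniformly superposed over positions."""
    thetas = img.flatten() * np.pi / 2
    M = thetas.size                              # number of pixels
    state = np.zeros(2 * M)
    for i, t in enumerate(thetas):
        state[2*i] = np.cos(t) / np.sqrt(M)      # color qubit |0> amplitude
        state[2*i + 1] = np.sin(t) / np.sqrt(M)  # color qubit |1> amplitude
    return state

img = np.array([[0.0, 0.5], [0.25, 1.0]])        # toy 2x2 image, grey in [0, 1]
psi = frqi_state(img)
print(np.isclose(np.sum(psi**2), 1.0))           # a valid normalized state
```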

An extension of the FRQI model that supports multiple color channels is the multichannel representation for quantum images (MCRQI) [6]. Other models that include color information are the novel enhanced quantum representation (NEQR), the normal arbitrary quantum superposition state (NAQSS), and the quantum image representation based on phase transform (CQIPT). The same idea of entanglement is used to represent this more complex information, for example employing a two-qubit entangled quantum state to represent three pixels.

Table 1. Comparison of Color Image Storage Methods

In summary, there are many novel quantum methods of storing images, all of which improve on the classical O(n) computational complexity of retrieval.

B. Processing

Quantum superposition allows multiple observable states to be processed simultaneously, providing parallelism from the start; on classical systems, merely starting parallel processes is just one of many challenges. Furthermore, quantum logic gates can outperform classical ones [7]. Deutsch and Jozsa demonstrated that a quantum computer can solve certain problems in exponentially less time than classical deterministic computers, and in somewhat less time than classical stochastic computers (i.e. those containing a hardware random number generator). These problems are solved using quantum measurement of a quantum circuit’s outcome.

To elaborate, quantum measurement of such a circuit’s outcome is probabilistic: in the simplest case, each basis state is produced with probability one half. Quantum measurement at the highest level is the following postulate: by choosing a basis, we can look to see which of the basis states the system is in. Unlike classical states, however, the two states do not exclude each other; they interfere and yield a global property that can be expressed as a function of both. This process is called quantum interference, and it was introduced earlier alongside entanglement: by measuring the first qubit only, one can determine a property of the interference between the two with a single evaluation, where a classical computer would need two evaluations. The gap widens for larger n: a classical deterministic computer requires 2^(n-1)+1 evaluations, while a quantum computer always needs just one [7].
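The Deutsch–Jozsa separation can be reproduced in a toy state-vector simulation. The sketch below uses the phase-oracle form of the circuit (Hadamards, one oracle application in superposition, Hadamards, then measurement of the all-zeros outcome) and is illustrative only; a real device would evaluate the oracle once physically:

```python
import numpy as np

def deutsch_jozsa(f_values):
    """State-vector sketch of Deutsch-Jozsa [7]: one oracle evaluation
    decides whether f is constant or balanced."""
    N = len(f_values)                       # N = 2^n possible inputs
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    while Hn.shape[0] < N:                  # build the n-fold tensor of H
        Hn = np.kron(Hn, H)
    state = Hn @ np.eye(N)[0]               # H^n |0...0>: uniform superposition
    state *= (-1) ** np.array(f_values)     # ONE oracle query, in superposition
    state = Hn @ state
    return abs(state[0]) ** 2               # P(measuring |0...0>)

print(deutsch_jozsa([0, 0, 0, 0]))          # constant f -> 1.0
print(deutsch_jozsa([0, 1, 1, 0]))          # balanced f -> 0.0
```

Measuring all zeros with probability 1 means f is constant; probability 0 means it is balanced, and the oracle was consulted only once either way.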

Research has shown that quantum computers can use maximally entangled qubits to reconstruct images without additional information [9], improving both storage and retrieval. Storage of images in qubit arrays was discussed previously, but retrieval and reconstruction are another important aspect that quantum computing improves upon: a quantum measurement probing the entanglement shared between the vertex qubits can determine their locations. Furthermore, the parallelism inherent in quantum systems has been used in fast image searching [8] as well as image reconstruction; the FRQI and Real Ket models can both process all pixels of an image simultaneously, demonstrating their inherent parallelism.

Image segmentation is also an important aspect of image processing in quantum computation. It is the process of dividing an image into separate regions, for example finding faces in a picture, and it is especially important for machine learning when the detection of these regions is automated. In traditional systems, complexities in the intensity and spatial location of objects make this task challenging; traditional image segmentation often fails because there is no prior information from which to form an expectation of the image. [9] expands on their techniques and how they improve segmentation relative to traditional systems. They again use maximally entangled states to store the points that correspond to objects within the image, then detect the entanglement to determine which pixels belong to which objects by knowing which vertices correspond to which objects. Detected corners are used to build geometric shapes corresponding to the object shapes, and mathematical morphology operators (techniques for analyzing geometric structures) fill out the space bordered by those shapes. The result is knowledge of which pixels belong to which objects. The example they provide is recognizing two ladders in an image: the corners of both ladders are mapped into maximally entangled states, and the qubits corresponding to those states are retrieved and used to determine which pixel belongs to which ladder, allowing the two ladders to be rapidly distinguished.
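The final fill step has a familiar classical counterpart. The sketch below is only a stand-in for that step using SciPy’s morphology routine binary_fill_holes: the corner detection is the part [9] performs with entangled states, and here the border is simply drawn by hand so the morphology fill can assign the interior pixels to the shape:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# Border of a rectangle, as if built from four detected corner points.
grid = np.zeros((7, 7), dtype=bool)
grid[1, 1:6] = grid[5, 1:6] = True     # top and bottom edges
grid[1:6, 1] = grid[1:6, 5] = True     # left and right edges

# Morphology fill: every pixel enclosed by the border joins the object.
filled = binary_fill_holes(grid)
print(filled.astype(int))
```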

Figure 7. Using maximally entangled states for image segmentation of two ladders.

Local feature extraction is of utmost importance in machine learning and is still in dire need of further research. A quantum local feature extraction framework based on the novel enhanced quantum representation (NEQR) has been proposed by [17]. This format was chosen because the pixels’ color information is stored in the basis states of a qubit sequence, making quantum image addition and subtraction flexible. Most relevant to machine learning is that the gradient can be computed for all of the image’s pixels simultaneously thanks to quantum superposition and parallelism, whereas on a classical computer such parallelism must be constructed after the fact.
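For readers unfamiliar with the quantity being computed, the sketch below shows the classical analogue: the per-pixel intensity gradient. NumPy expresses it in one vectorized call, but the hardware still visits every pixel, whereas the NEQR-based framework in [17] obtains the same map in superposition, for all pixels at once:

```python
import numpy as np

img = np.random.default_rng(2).random((64, 64))  # toy 64x64 greyscale image
gy, gx = np.gradient(img)                        # vertical / horizontal derivatives
magnitude = np.hypot(gx, gy)                     # gradient strength per pixel
print(magnitude.shape)                           # (64, 64): one value per pixel
```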

These concepts all relate to quantum learning (QL), machine learning with quantum systems [18]. One example of a QL algorithm appears in [18], where the authors use the Schmidt decomposition, beyond its usual role in describing entanglement, to condense the main information of a quantum image into a few large coefficients, which they treat as global features of the quantum image. Their algorithm predicts the class to which a quantum object (state) belongs by computing the Hamming distance, determining the class of a new object from prior training of the predictive model on images. The complexity of the algorithm is beyond the scope of this paper. This method extracts global features of quantum images rather than working on the raw image data, which makes it more applicable to current image classification problems in classical systems and would allow a smoother integration of quantum physical systems with classical hardware.
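The Schmidt decomposition itself is numerically routine: reshape a bipartite state’s amplitude vector into a matrix and take its singular values, which are exactly the Schmidt coefficients. The sketch below demonstrates this on a random toy state (not the actual feature pipeline of [18], where the coefficients of an image state serve as features):

```python
import numpy as np

def schmidt_coefficients(state, dim_a, dim_b):
    """Schmidt coefficients of a bipartite pure state: the singular values
    of the amplitude vector reshaped into a dim_a x dim_b matrix."""
    return np.linalg.svd(state.reshape(dim_a, dim_b), compute_uv=False)

# Toy 4-qubit "image state", split as 2 qubits x 2 qubits (a 4 x 4 matrix).
rng = np.random.default_rng(3)
state = rng.normal(size=16)
state /= np.linalg.norm(state)

coeffs = schmidt_coefficients(state, 4, 4)
print(coeffs)                                # sorted descending
print(np.isclose(np.sum(coeffs**2), 1.0))    # squares sum to 1
```

A few dominant coefficients summarize the state, which is why they can act as condensed global features.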

The next aspect of machine image recognition is similarity matching. Research has demonstrated that the similarity between two images of the same size represented by FRQI can be estimated from the probability distribution of quantum measurement results [6]. In their example, twelve qubits encode the images and seven quantum measurements are performed to obtain the probability distributions, far fewer operations than a classical search algorithm would need.
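The sketch below conveys the idea in miniature, though it is not the exact estimation procedure of [6]: each state’s measurement distribution is estimated from a finite number of simulated measurements, and two distributions are then compared directly (here with total variation distance as the similarity score):

```python
import numpy as np

rng = np.random.default_rng(4)

def sampled_distribution(state, shots):
    """Estimate a state's measurement distribution from repeated
    measurements (each shot collapses a fresh copy of the state)."""
    probs = np.abs(state) ** 2
    counts = np.bincount(rng.choice(len(state), size=shots, p=probs),
                         minlength=len(state))
    return counts / shots

a = np.array([0.8, 0.6, 0.0, 0.0])           # two toy normalized "image" states
b = np.array([0.7, 0.7, 0.1, 0.1])
b /= np.linalg.norm(b)                       # ensure normalization

pa, pb = sampled_distribution(a, 1000), sampled_distribution(b, 1000)
print(0.5 * np.abs(pa - pb).sum())           # small value -> similar images
```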

IV. Advantages and Disadvantages

A. Advantages

The advantages center on improved processing and storage with fewer required resources. Although quantum computers are largely unavailable to the public, there are certain use cases where private or government entities could make, and even do make, great use of them. An example is CCTV in the United Kingdom. These cameras run 24/7 capturing 30 frames per second [6], which amounts to roughly 725,760 × 10^8 frames in 7 days across the country’s estimated 4 to 5.9 million cameras. (At 30 frames per second, a single camera records 30 × 604,800 ≈ 1.8 × 10^7 frames per week; multiplied by 4 million cameras, this gives about 7.3 × 10^13 frames.) The demand for faster and more efficient means of storing images would greatly benefit systems like this, and it is one reason quantum image processing is in demand. Processing in quantum computation is also inherently parallel, which addresses a common issue in the database management systems used to store images. The same applies to image processing operations such as reconstruction or image recognition, which are even harder problems in certain contexts. A final advantage is the room to grow in this field: companies that develop novel ways to use quantum computers for image processing or other machine learning tasks will lead an innovative industry. Techniques learned in image processing can also carry over to other contexts, such as malware detection, where static file information is treated as an image and convolutions extract the features.

B. Disadvantages

Most of these concepts are theoretical and not necessarily employed in applications. There is also a lack of consistent evaluation of these methods in the research; many papers fixate on visual quality, which can be subjective. Furthermore, most quantum machine learning research is highly theoretical and has even less application. Still, the use of machine learning is growing heavily, which shows the potential for quantum image processing and quantum learning to grow similarly.

Quantum computers are not accessible to the public because they are very expensive, which limits research and novel approaches: only so many researchers can acquire access to them. The limitation is compounded by the fact that private companies are the most likely to have access to quantum computers and are less likely than academia to publish findings or research progress. Expense is not the only barrier to their emergence into the public economy; integration is another. Quantum-classical interfaces were mentioned in brief, but the broader point is that before quantum computers can be readily available, systems that integrate the quantum physical layer with traditional software and hardware are necessary.

V. Conclusion

This article has given a quick introduction to quantum computation, how it has been applied in the research field of image processing, and how it can be superior to classical computers for image processing. The improvement of quantum over classical methods of storage results from the superposition property of quantum states, which also enables quantum parallel computing. Image representation schemes in quantum computation take advantage of superposition, and especially entanglement, allowing parallel reconstruction of images from fewer qubits than the bits required in classical systems.

The promise of this field lies in quantum learning. Creating, training, and testing machine learning algorithms (e.g. support-vector machines, k-nearest neighbors, and neural networks) takes a large amount of data and a large processing load for both training and testing. With quantum computation, what once took a week of data collection and model training could take just a fraction of that time.

References

[1] A.Y. Vlasov, “Quantum Computations and Images Recognition”, Federal Center for Radiology, arXiv:quant-ph/9703010, May 1997.

[2] S.E. Venegas-Andraca and S. Bose, “Storing, processing and retrieving an image using quantum mechanics”, in Proceedings of SPIE Quantum Information and Computation, 2003.

[3] J.I. Latorre, “Image Compression and Entanglement”, arXiv:quant-ph/0510031, 2005.

[4] P.Q. Le, F. Dong, K. Hirota, “A flexible representation of quantum images for polynomial preparation, image compression, and processing operations”, Quantum Inf. Process, vol. 10, no. 1, pp. 63–84, 2011.

[5] D.J. Wineland, “Superposition, entanglement, and raising Schrödinger’s cat”, Annalen der Physik, vol. 525, no. 10–11, pp. 739–752, Jun. 2013.

[6] N. Abura’ed, F.S. Khan, and H. Bhaskar, “Advances in the Quantum Theoretical Approach to Image Processing Applications”, ACM Comput. Surv., vol. 49, no. 4, Feb. 2017. DOI: https://doi.org/10.1145/3009965

[7] D. Deutsch and R. Jozsa, “Rapid solution of problems by quantum computation”, Proceedings of the Royal Society of London A, vol. 439, pp. 553–558, 1992.

[8] L.K. Grover, “A fast quantum mechanical algorithm for database search”, in Proceedings of the 28th Annual ACM Symposium on the Theory of Computing, pp. 212–219, 1996.

[9] S.E. Venegas-Andraca and J.L. Ball, “Processing Images in Entangled Quantum Systems”, Quantum Information Processing, vol. 9, no. 1, pp. 1–11, Feb. 2010.

[10] D.J. Reilly, “Engineering the quantum-classical interface of solid-state qubits”, npj Quantum Information, Aug. 2015. http://dx.doi.org/10.1038/npjqi.2015.11

[11] J.I. Latorre, “Image compression and entanglement”, arXiv:quant-ph/0510031, 2005.

[12] P.Q. Le, F. Dong, Y. Arai, and K. Hirota, “Flexible representation of quantum images and its computational complexity analysis”, in Proceedings of the Fuzzy System Symposium, pp. 185–185, 2009.

[13] P.Q. Le, A.M. Iliyasu, F. Dong, and K. Hirota, “A flexible representation and invertible transformations for images on quantum computers”, Studies in Computational Intelligence, vol. 372, pp. 179–202, 2011.

[14] Y. Liu, “An Exact Quantum Search Algorithm with Arbitrary Database”, International Journal of Theoretical Physics, vol. 53, no. 8, pp. 2571–2578, Aug. 2014.

[15] H. Miyajima, M. Fujisaki, and N. Shigei, “Quantum Search Algorithms in Analog and Digital Models”, IAENG International Journal of Computer Science, vol. 39, no. 2, pp. 182–189, 2012.

[16] D. Bulger, “Combining a Local Search and Grover’s Algorithm in Black-Box Global Optimization”, Journal of Optimization Theory & Applications, vol. 133, no. 3, pp. 289–301, Jun. 2007.

[17] Y. Zhang, K. Lu, K. Xu, Y. Gao, and R. Wilson, “Local feature point extraction for quantum images”, Quantum Information Processing, vol. 14, no. 5, pp. 1573–1588, 2015.

[18] Y. Ruan, H. Chen, J. Tan, and X. Li, “Quantum computation for large-scale image classification”, Quantum Information Processing, vol. 15, pp. 4049–4069, Jul. 2016.

PhD Student in the UCF Center for Research in Computer Vision https://www.linkedin.com/in/madelineschiappa/
