Research Proposal: X-ray Images Enhancement


INTRODUCTION

1.1 Digital image

A digital image is essentially a two-dimensional array of light-intensity levels, which can be denoted by f(x,y), where the value or amplitude of f at spatial coordinates (x,y) gives the intensity of the image at that point. The intensity is a measure of the relative “brightness” of each point. For a monochrome (single-color) digital image, the brightness level is represented by a series of discrete intensity shades from darkest to brightest. These discrete intensity shades are usually referred to as the “gray levels”, with black representing the darkest level and white the brightest. These levels are encoded as binary bits in the digital domain; the most commonly used encoding scheme is the 8-bit display with 256 levels of brightness or intensity, from level 0 (black) to 255 (white). The digital image can therefore be conveniently represented and manipulated as an N (number of rows) x M (number of columns) matrix, with each element containing a value between 0 and 255 (for an 8-bit monochrome image), i.e.

f(x,y) = [ f(0,0)      f(0,1)      ...   f(0,M-1)
           f(1,0)      f(1,1)      ...   f(1,M-1)
           ...         ...         ...   ...
           f(N-1,0)    f(N-1,1)    ...   f(N-1,M-1) ] ,   where 0 ≤ f(x,y) ≤ 255.

Different colors are created by mixing different proportions of the three primary colors red, green and blue (RGB). Hence, a color image is represented by an N x M x 3 three-dimensional matrix, with each layer representing the gray-level distribution of one primary color in the image.
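As a purely illustrative aside (the tiny pixel values below are made up, and the Python/NumPy environment is an assumption rather than part of this proposal), the matrix view translates directly into an array:

    import numpy as np

    # A made-up 4 x 4 monochrome image: each entry is an 8-bit gray level (0 = black, 255 = white).
    gray = np.array([[  0,  64, 128, 255],
                     [ 32,  96, 160, 224],
                     [ 16,  80, 144, 208],
                     [  8,  72, 136, 200]], dtype=np.uint8)

    print(gray.shape)   # (4, 4) -> N rows x M columns
    print(gray[0, 3])   # f(0, 3) = 255, a white pixel

    # A color image adds a third axis of length 3, one plane per primary color (R, G, B).
    color = np.zeros((4, 4, 3), dtype=np.uint8)
    color[..., 0] = gray          # use the gray values as the red plane
    print(color.shape)            # (4, 4, 3)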

Each point in the image, denoted by its (x,y) coordinates, is referred to as a pixel. The pixel is the smallest cell of information in the image; it contains a value of the intensity level corresponding to the detected irradiance. The pixel size therefore defines the resolution and acuity of the image seen. Each individual detector in the sensor array and each dot on the LCD (liquid crystal display) screen contributes to generating one pixel of the image. There is actually a physical separation distance between pixels due to finite manufacturing tolerance. However, these separations are not detectable, as the human eye is unable to resolve such small details at normal viewing distance (refer to Rayleigh’s criterion for resolution of diffraction-limited images [1]).

For simplicity, digital images are represented by an array of square pixels. The relation between pixels constitutes the information contained in an image. A pixel at coordinates (x,y) has eight immediate neighbors: four horizontal and vertical neighbors at unit distance and four diagonal neighbors at distance √2 (Figure 1):

(x-1, y-1)   (x-1, y)   (x-1, y+1)
(x,   y-1)   (x,   y)   (x,   y+1)
(x+1, y-1)   (x+1, y)   (x+1, y+1)

Figure 1: Neighbors of a pixel. Note the direction of the x and y coordinates used: x increases down the rows and y increases across the columns.

Pixels can be connected to form boundaries of objects or components of regions in an image when the gray levels of adjacent pixels satisfy a specified criterion of similarity (equal or within a small difference). The difference in the gray levels of two adjacent pixels gives the contrast needed to differentiate between regions or objects. This difference has to be of a certain magnitude in order for the human eye to identify it as a boundary.
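A hedged sketch of such a similarity criterion (the 3 x 3 array and the threshold of 20 gray levels are arbitrary illustrative choices, not values from this proposal):

    import numpy as np

    gray = np.array([[10, 12, 200],
                     [11, 13, 210],
                     [12, 14, 205]], dtype=np.uint8)

    threshold = 20  # assumed similarity criterion: adjacent gray levels within 20 of each other

    # Compare each pixel with its right-hand neighbor; True means "similar enough to connect",
    # False marks a contrast jump large enough for the eye to read as a boundary.
    diff = np.abs(gray[:, 1:].astype(int) - gray[:, :-1].astype(int))
    print(diff <= threshold)
    # The jumps from the second to the third column (e.g. 12 -> 200) exceed the threshold,
    # so a vertical boundary is detected between those two columns.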

1.2 Image processing

Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it [2].

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (or more), digital image processing may be modeled in the form of multidimensional systems.

Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts Institute of Technology, Bell Laboratories, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement.

Digital image processing consists of several steps. The first step is image acquisition, in which a digital image is obtained. Once a digital image has been acquired, the next important step is preprocessing. The key function of the preprocessing stage is to improve the image in ways that increase the chances of success of the later processes, producing better image quality and reducing noise. The next stage deals with image segmentation, which partitions the input image into its constituent parts or objects.

The next step is representation and description. Representation transforms the raw data into a descriptive form suitable for computer processing. Description deals with extracting features from that representation; the resulting descriptions are necessarily task specific. The last step is recognition, the process that assigns a label to an object based on the information provided by its descriptors. Interpretation assigns meaning to the recognized objects.

1.3 Image preprocessing

Image pre-processing is the term for operations on images at the lowest level of abstraction. These operations do not increase image information content; they decrease it, if entropy is taken as the information measure [3] [4]. The aim of pre-processing is an improvement of the image data that suppresses undesired distortions or enhances image features relevant for the further processing and analysis task. Image pre-processing exploits the redundancy in images: neighboring pixels corresponding to one real object have the same or similar brightness values, so if a distorted pixel can be picked out from the image, it can be restored as an average of its neighboring pixels. Image pre-processing methods can be classified into categories according to the size of the pixel neighborhood that is used to calculate the new pixel brightness.
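A minimal sketch of this restoration idea, assuming a toy 3 x 3 patch whose centre pixel has been corrupted (all values are made up for illustration):

    import numpy as np

    patch = np.array([[50,  52, 51],
                      [49, 255, 53],   # the centre pixel (255) is an obvious outlier
                      [51,  50, 52]], dtype=np.uint8)

    # Restore the distorted centre pixel as the average of its eight neighbors.
    neighbors = np.delete(patch.flatten(), 4)      # drop the centre value
    patch[1, 1] = int(round(neighbors.mean()))     # -> 51, consistent with the surroundings
    print(patch)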

Image enhancement is necessary to improve the visual appearance of the image or to provide a better representation for future automated image processing such as image analysis, detection, segmentation and recognition [5][6]. To discern the concealed but important information in the images, it is necessary to use various image enhancement methods such as enhancing edges, emphasizing differences, or reducing noise.

In this thesis, one of these enhancement methods will be applied to X-ray images to increase both the accuracy and the interpretability of the data.

Digital images are now ubiquitous. Digital cameras, the main source of digital images, are widely available at low cost. Sometimes the image taken from a digital camera is of poor quality and requires enhancement. Many techniques exist that can enhance a digital image without spoiling it.

The enhancement methods can broadly be divided in to the following two categories:

1. Spatial Domain Methods

2. Frequency Domain Methods

In spatial domain techniques, we deal directly with the image pixels: the pixel values are manipulated to achieve the desired enhancement. In frequency domain methods, the image is first transformed into the frequency domain; that is, the Fourier transform of the image is computed first. All enhancement operations are performed on the Fourier transform of the image, and the inverse Fourier transform is then applied to obtain the resultant image. These enhancement operations are performed in order to modify the image brightness, contrast or the distribution of the gray levels. As a consequence, the pixel values (intensities) of the output image are modified according to the transformation function applied to the input values.
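The sketch below contrasts the two approaches on a synthetic stand-in image (the contrast-stretch factor and the low-pass cutoff radius are arbitrary illustrative values, not parameters proposed here):

    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)   # stand-in for a real image

    # Spatial domain: manipulate the pixel values directly, e.g. a contrast stretch about mid-gray.
    spatial = np.clip(1.5 * (img - 128) + 128, 0, 255)

    # Frequency domain: Fourier transform, modify the spectrum, inverse transform.
    F = np.fft.fftshift(np.fft.fft2(img))                              # forward transform, centred
    rows, cols = img.shape
    r, c = np.ogrid[:rows, :cols]
    lowpass = (r - rows // 2) ** 2 + (c - cols // 2) ** 2 <= 16 ** 2   # keep low frequencies only
    freq = np.real(np.fft.ifft2(np.fft.ifftshift(F * lowpass)))        # smoothed result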

Image enhancement is applied in many fields where images need to be understood and analyzed, for example satellite image analysis and medical image analysis.

BACKGROUND

The aim of image enhancement is to improve the interpretability of information in images for human viewers, or to provide better input for other automated image processing techniques. IE has contributed to research advancement in various fields. Some of the areas in which IE has wide application are mentioned below.

1. Medical imaging [7], [8], [9], uses IE techniques for reducing noise and sharpening details to improve the visual representation of the image. Since minute details play a critical role in diagnosis and treatment of disease, it is essential to highlight important features while displaying medical images. This makes IE a necessary tool for viewing anatomic areas in MRI, ultrasound and x-rays to name a few.

2. In forensics [10], [11], IE is used for identification, evidence gathering and surveillance. Images obtained from fingerprint detection, security videos analysis and crime scene investigations are enhanced to help in identification of culprits and protection of victims.

3. In atmospheric sciences [12], [13], IE is used to reduce the effects of haze, fog, mist and turbulent weather for meteorological observations. It helps in detecting shape and structure of remote objects in environment sensing [14].

4. Astrophotography faces difficulties due to light and noise pollution that can be minimized by IE [15]. Several cameras have built-in IE functions for real-time sharpening and contrast enhancement. Moreover, numerous software packages [16], [17] allow such images to be edited to produce better and more vivid results.

5. IE techniques have been used in oceanography, where the study of images reveals interesting features of water flow, sediment concentration, geomorphology and bathymetric patterns, to name a few. These features are more clearly observable in images that are enhanced to overcome the problems of moving targets, deficiency of light and obscure surroundings.

6. IE techniques, when applied to pictures and videos, help the visually impaired in reading small print, using computers and televisions, and in face recognition [18]. Several studies have been conducted [19], [20] that highlight the need for and value of using IE for the visually impaired.

7. Virtual restoration of historic paintings and artifacts [21] often employs the techniques of IE in order to reduce stains and crevices. Color contrast enhancement, sharpening and brightening are just some of the techniques used to make the images vivid. IE is a powerful tool for restorers who can make informed decisions by viewing the results of restoring a painting beforehand. It is equally useful in discerning text from worn-out historic documents [22].

8. In the e-learning field, IE is used to clarify the contents of a chalkboard as viewed on streamed video; it helps students focus on the text and improves content readability [23]. Similarly, collaboration [24] through a whiteboard is facilitated by enhancing the shared data and diminishing artifacts such as shadows and blemishes.

9. Numerous other fields, including meteorology, microbiology, biomedicine, bacteriology, climatology and law enforcement, benefit from various IE techniques. These benefits are not limited to professional studies and businesses but extend to common users who employ IE to cosmetically enhance and correct their images.

Inspired by the use of image enhancement in a multitude of fields, this research aims at using these techniques on X-ray images, where the raw data obtained directly from the X-ray acquisition device may yield a relatively poor-quality image representation.

RESEARCH PROBLEM

The x-ray image enhancement problems can be classified into three main problems:

(1) X-ray images (especially thorax images) include different regions containing details. Both sharp and soft transitions between the regions and details may exist in all visual spans. When all details are enhanced to the same extent, the relatively significant details cover most of the visual span and prevent the visibility of relatively less significant details.

(2) Since X-ray images are used for diagnostic purposes, the enhancement must not introduce misleading information; making a structure look more or less significant than it actually is must be avoided.

(3) Data loss is not desirable in diagnostic images. Therefore, the noise attenuation procedure must not remove any visual information.

Another problem with X-ray (especially thorax) images is the risk of incorporating a priori information about the visual structures of the image for enhancement and denoising purposes. Unlike common images, X-ray images are rendered volume data, and the transitions between the same structures may be smooth or sharp depending on the viewing angle.

RESEARCH QUESTIONS

RQ1: Is it possible to enhance x-ray images without losing important details?

RQ2: Will the proposed methods lead doctors to a correct diagnosis?

RESEARCH OBJECTIVES

The objectives of the study are:

  • To investigate image enhancement techniques that improve the quality of X-ray images.
  • To propose a new framework for X-ray image enhancement.
  • To provide noise reduction with considerably less blurring by using an effective filter, namely the median filter.
  • To propose a method that increases the sharpness of X-ray images.
  • To design an X-ray image enhancement system based on the proposed methods for better diagnosis.

SIGNIFICANCE OF STUDY

The goal of image enhancement is to improve the characteristics and quality of an image, such that the resulting image is better than the original.

Enhancement operations have important potential for obtaining as much easily interpretable diagnostic information as possible with reasonable absorbed doses of ionising radiation. Due to the increasing use of high-precision, high-resolution images and the limited number of human experts, the computational efficiency of denoising and enhancement becomes important.

RESEARCH SCOPE

This research focuses on the enhancement of X-ray images.

The proposed system will work on X-ray images. X-ray images pose particular problems for enhancement because they are used for diagnostic purposes: the enhancement must not introduce misleading information, and making a structure look more or less significant than it actually is must be avoided.

In this research, a suitable enhancement method will be used to obtain better-quality X-ray images.

CONTRIBUTION

The raw data obtained directly from the X-ray acquisition device may yield a relatively poor-quality image representation. To support a correct diagnosis, we will use enhancement techniques to obtain better-quality images.

RESEARCH METHODOLOGY

Image enhancement improves the perception of information in images for human viewers and provides better input for other automated image processing techniques. The main objective of image enhancement is to modify the features of an image to make it more suitable for a given task. A great deal of subjectivity is involved in the choice of image enhancement methods. Many techniques exist that can enhance a digital image without spoiling it.

The proposed method consists of three steps (a code sketch of the full pipeline is given after the list):

1. Apply Contrast Limited Adaptive Histogram Equalization (CLAHE) to the original X-ray image.

2. Apply a median filter to the contrast-enhanced image.

3. Create the negative of the resulting image.
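A minimal sketch of this three-step pipeline, assuming Python with OpenCV rather than the ImageJ environment described later, with a hypothetical file name and commonly used parameter values (none of these specifics are prescribed by the proposal):

    import cv2

    # Hypothetical input file; any 8-bit grayscale X-ray image would do.
    xray = cv2.imread("chest_xray.jpg", cv2.IMREAD_GRAYSCALE)

    # Step 1: CLAHE with assumed, commonly used parameters.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrasted = clahe.apply(xray)

    # Step 2: 3x3 median filter to suppress noise while preserving edges.
    denoised = cv2.medianBlur(contrasted, 3)

    # Step 3: negative of the 8-bit image (invert the gray levels).
    negative = 255 - denoised

    cv2.imwrite("enhanced_xray.png", negative)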

9.1 Contrast Limited Adaptive Histogram Equalization (CLAHE)

Adaptive histogram equalization is a computer image processing technique used to improve contrast in images. CLAHE differs from ordinary histogram equalization in that the adaptive method computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the lightness values of the image. Ordinary histogram equalization uses a single histogram for the entire image [2].

Adaptive histogram equalization is an image enhancement technique capable of improving an image’s local contrast, bringing out more detail. However, it can also produce significant noise. Contrast limited adaptive histogram equalization (CLAHE) is a generalization of adaptive histogram equalization that was developed to address this problem of noise amplification.

The noise problem associated with AHE can be reduced by limiting contrast enhancement specifically in homogeneous areas. These areas can be characterized by a high peak in the histogram of a contextual region, since many pixels fall inside the same gray-level range. CLAHE limits the slope associated with the gray-level assignment scheme to prevent saturation. This is accomplished by allowing only a maximum number of pixels in each of the bins associated with the local histograms. After “clipping” the histogram, the clipped pixels are redistributed equally over the whole histogram to keep the total histogram count identical. The CLAHE process is summarized in Table 1.

The clip limit is defined as a multiple of the average histogram contents and is actually a contrast factor. Setting a very high clip limit effectively disables the clipping, and the process becomes a standard AHE technique. A clip or contrast factor of one prohibits any contrast enhancement, preserving the original image.

1. Obtain all the inputs:
   • Image.
   • Number of regions in the row and column directions.
   • Number of bins for the histograms used in building the image transform function (dynamic range).
   • Clip limit for contrast limiting (normalized from 0 to 1).

2. Pre-process the inputs:
   • Determine the real clip limit from the normalized value.
   • If necessary, pad the image (to even size) before splitting it into regions.

3. Process each contextual region (tile), producing gray-level mappings:
   • Extract a single image region.
   • Make a histogram for this region using the specified number of bins.
   • Clip the histogram using the clip limit.
   • Create a mapping (transformation function) for this region.

4. Interpolate gray-level mappings in order to assemble the final CLAHE image:
   • Extract a cluster of four neighboring mapping functions.
   • Process the image region partly overlapping each of the mapping tiles.
   • Extract a single pixel, apply the four mappings to that pixel, and interpolate between the results to obtain the output pixel.
   • Repeat over the entire image.

Table 1: The CLAHE process.
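The clipping and redistribution step of Table 1 can be sketched as follows (the 8-bin toy histogram and the clip limit are assumed values; a full implementation typically iterates until no bin exceeds the limit):

    import numpy as np

    def clip_histogram(hist, clip_limit):
        """Clip a tile histogram at clip_limit and spread the excess counts evenly
        over all bins, so that the total count stays the same (Table 1, step 3)."""
        hist = hist.astype(np.float64)
        excess = np.sum(np.maximum(hist - clip_limit, 0))
        clipped = np.minimum(hist, clip_limit)
        return clipped + excess / hist.size

    hist = np.array([2, 3, 50, 4, 1, 0, 2, 2])          # toy histogram of one contextual region
    clipped = clip_histogram(hist, clip_limit=10)
    print(clipped)                                      # the 50-count peak is flattened
    print(hist.sum(), clipped.sum())                    # both 64: the total count is preserved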

9.2 Median Filter

We will use this kind of filter on the contrast-enhanced X-ray image. In signal processing, it is often desirable to perform some kind of noise reduction on an image or signal. The median filter is a nonlinear digital filtering technique, often used to remove noise from images. Noise reduction is a pre-processing step that improves the results of later processing (such as edge detection on an image). Median filtering is used widely in digital image processing because, under certain conditions, it preserves edges while removing noise.

The main idea of the median filter is to run through the signal entry by entry, replacing each entry with the median of neighboring entries. The pattern of neighbors is called the “window”, which slides, entry by entry, over the entire signal. For 1D signals, the most obvious window is just the first few preceding and following entries, whereas for 2D (or higher-dimensional) signals such as images, more complex window patterns are possible (such as “box” or “cross” patterns). Note that if the window has an odd number of entries, then the median is simple to define: it is just the middle value after all the entries in the window are sorted numerically. For an even number of entries, there is more than one possible median [2].
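A small sketch of this sliding-window idea, using SciPy’s median_filter on a toy 1-D signal that contains two impulse values (all numbers are made up for illustration):

    import numpy as np
    from scipy.ndimage import median_filter

    # Toy signal with impulse ("salt-and-pepper" style) noise at two positions.
    signal = np.array([10, 11, 250, 12, 13, 0, 14, 15], dtype=np.uint8)

    # 3-entry window: each sample is replaced by the median of itself and its two neighbors,
    # which removes the impulses while leaving the gentle ramp intact.
    print(median_filter(signal, size=3))

    # For a 2-D image the same call applies with a square window, e.g. median_filter(image, size=3).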

Advantages of the median filter:

  • They provide excellent noise reduction capabilities, with considerably less blurring than linear smoothing filters of similar size.
  • Median filters are particularly effective in the presence of both bipolar and unipolar impulse noise.
  • The median value must be one of the pixel values present in the neighborhood, so the median filter does not create new, unrealistic pixel values.

9.3 Unsharp Mask

An “unsharp mask” is actually used to sharpen an image, contrary to what its name might lead you to believe. Sharpening can help you emphasize texture and detail, and is critical when post-processing most digital images. Unsharp masking is probably the most common type of sharpening, and can be performed with nearly any image editing software (such as Photoshop). An unsharp mask cannot create additional detail, but it can greatly enhance the appearance of detail by increasing small-scale acutance.

The sharpening process works by utilizing a slightly blurred version of the original image. This is then subtracted from the original to detect the presence of edges, creating the unsharp mask (effectively a high-pass filter). Contrast is then selectively increased along these edges using this mask, leaving behind a sharper final image.
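A hedged sketch of these three steps in Python with OpenCV (the file name, the Gaussian sigma and the strength factor are assumed illustrative values, not parameters prescribed by this proposal):

    import cv2
    import numpy as np

    image = cv2.imread("chest_xray.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)

    # 1. Build a slightly blurred version of the original.
    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=3)

    # 2. The difference (original - blurred) is the unsharp mask: large near edges, near zero elsewhere.
    mask = image - blurred

    # 3. Add a scaled copy of the mask back to raise local contrast along the edges.
    amount = 1.0
    sharpened = np.clip(image + amount * mask, 0, 255).astype(np.uint8)
    cv2.imwrite("sharpened.png", sharpened)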

9.4 ImageJ

ImageJ is a public-domain Java image processing and analysis program developed at the National Institutes of Health. It runs, either as an online applet or as a downloadable application, on any computer with a Java 1.5 or later virtual machine. ImageJ can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and ‘raw’. It supports ‘stacks’ (and hyperstacks), series of images that share a single window, and it has many tools and menu commands for ease of use. We will use this environment to apply the proposed algorithms to X-ray images.

9.5 Research Database

To evaluate our proposed system, we need a database. A free database will be taken from the http://www.imageprocessingplace.com website. It contains more than 50,000 X-ray images of different parts of the body; the standard database format is JPEG.

9.6 Evaluation

Almost every X-ray image needs to be improved in order to facilitate access to, or extraction of, important information. At the same time, this process is sensitive, because X-ray imaging is one of the principal ways of diagnosing disease, and adding information to X-ray images can lead to a wrong diagnosis. To evaluate our work, after the required implementation is finished, the output data and input data will be sent to experts (doctors and X-ray photographers), who will report on this work.

LITERATURE REVIEW

1. CLASSIFICATION OF IMAGES:

1.1 Intensity Images

An intensity image is a data matrix whose values have been scaled to represent intensities. When the elements of an intensity image are of class uint8 or uint16, they have integer values in the range [0, 255] and [0, 65535], respectively. If the image is of class double, the values are floating-point numbers; by convention, the values of scaled, class double intensity images are in the range [0, 1] [25].
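An analogous illustration in Python/NumPy (not MATLAB, which the quoted description assumes) of the same pixel values expressed in each of these ranges:

    import numpy as np

    img8  = np.array([[0, 128, 255]], dtype=np.uint8)   # 8-bit intensities: 0..255
    img16 = img8.astype(np.uint16) * 257                # 16-bit: 0..65535 (255 * 257 = 65535)
    imgf  = img8.astype(np.float64) / 255.0             # floating point, scaled to [0, 1]

    print(img16)   # [[    0 32896 65535]]
    print(imgf)    # [[0.         0.50196078 1.        ]]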

1.2 Indexed Images

An indexed image is an array of class logical, uint8, uint16, single, or double whose pixel values are direct indices into a color map. The color map is an m-by-3 array of class double. For single or double arrays, integer values range over [1, p]; for logical, uint8, or uint16 arrays, values range over [0, p-1]. An indexed image consists of an array and a color map matrix; the pixel values in the array are direct indices into the color map. By convention, this documentation uses the variable name X to refer to the array and map to refer to the color map [25].

1.3 Binary Images

Binary images have a very specific meaning in MATLAB. In a binary image, each pixel assumes one of only two discrete values: 1 or 0, interpreted as white and black, respectively. A binary image is stored as a logical array. Thus, an array of 0s and 1s whose values are of a numeric data class, say uint8, is not considered a binary image in MATLAB [25].

Figure 2. Binary image

1.4 Grayscale Images

A grayscale image (also called gray-scale, gray scale, or gray-level) is a data matrix whose values represent intensities within some range. MATLAB stores a grayscale image as an individual matrix, with each element of the matrix corresponding to one image pixel. By convention, this documentation uses the variable name I to refer to grayscale images. A grayscale image is an array of class uint8, uint16, int16, single, or double. For single or double arrays, values range over [0, 1]; for uint8, over [0, 255]; for uint16, over [0, 65535]; and for int16, over [-32768, 32767] [25].

Figure 3. Grayscale image

1.5 True color Images

A true color image is an image in which each pixel is specified by three values, one each for the red, green, and blue components of the pixel‘s color. MATLAB stores true color images as an m-by-n-by-3 data array that defines the red, green, and blue color components for each individual pixel. True color images do not use a color map; the color of each pixel is determined by the combination of the red, green, and blue intensities stored in each color plane at the pixel‘s location. Graphics file formats store true color images as 24-bit images, where the red, green, and blue components are 8 bits each. This yields a potential of 16 million colors. The precision with which a real-life image can be replicated has led to the commonly used term true color image [25].
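For concreteness, the “16 million colors” figure follows directly from the bit depth: with 8 bits per channel there are 256 levels each of red, green and blue, so 256 × 256 × 256 = 2^24 = 16,777,216 distinct colors.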

Figure 4. Color Image.

2. X-Ray images

X rays were discovered accidentally by the German scientist Wilhelm Röntgen (1845-1923) in 1895. He found that a cathode-ray tube emitted certain invisible rays that could penetrate paper and wood; as the first person in the world to see through human flesh, he even saw a perfectly clear outline of the bones in his own hand. Röntgen studied these new rays, which he called X rays, for several weeks before publishing his findings in December of 1895. For his great discovery, he was given the honorary title of Doctor of Medicine and awarded the 1901 Nobel Prize in physics. Adamant that his discovery was free for the benefit of humankind, Röntgen refused to patent it [26].

X rays are waves of electromagnetic energy which behave in much the same way as light rays, but at wavelengths approximately 1000 times shorter than the wavelength of light. X rays can pass uninterrupted through low-density substances such as tissue, whereas higher-density targets reflect or absorb the X rays because there is less space between the atoms for the short waves to pass through. Thus, an X-ray image shows dark areas where the rays traveled completely through the target (such as flesh) and light areas where the rays were blocked by dense material (such as bone). Following the discovery of X rays in 1895, this scientific wonder was seized upon by sideshow entertainers who allowed patrons to view their own skeletons and gave them pictures of their own bony hands wearing silhouetted jewelry.

The most important application of the X ray, however, was in medicine, an importance recognized almost immediately after Röntgen’s findings were published. Within weeks of its first demonstration, an X-ray machine was used in America to diagnose bone fractures. Thomas Alva Edison invented an X-ray fluoroscope in 1896, which was used by American physiologist Walter Cannon (1871-1945) to observe the movement of barium sulfate through the digestive system of animals and, eventually, humans. In 1913 the first X-ray tube designed specifically for medical purposes was developed by American chemist William Coolidge. X rays have since become the most reliable method for internal diagnosis.

At the same time, a new science was being founded on the principles introduced by German physicist Max von Laue (1879-1960), who theorized that crystals could be to X rays what diffraction gratings were to visible light. He conducted experiments in which the interference patterns of X rays passing through a crystal were examined; these patterns revealed a great deal about the internal structure of the crystal. William Henry Bragg and his son William Lawrence Bragg took this field even further, developing a system of mathematics that could be used to interpret the interference patterns. This method, known as X-ray crystallography, allowed scientists to study the structures of crystals with unsurpassed precision and is an important tool for scientists, particularly those striving to synthesize chemicals. By analyzing the information within a crystal’s interference pattern, enough can be learned about that substance to create it artificially in a laboratory, and in large quantities. This technique was used to determine the molecular structures of penicillin, insulin, and DNA.

Modern medical X-ray machines are grouped into two categories: “hard” or “soft” X rays. Soft X rays, which operate at a relatively low frequency, are used to image bones and internal organs and, unless repeated excessively, cause little tissue damage. Hard X rays, at very high frequencies designed to destroy molecules within specific cells and thus destroy tissue, are used in radiotherapy, particularly in the treatment of cancer. The high voltage necessary to generate hard X rays is usually produced using cyclotrons or synchrotrons (variations of particle accelerators, or atom smashers).

In 1996, amorphous silicon X-ray detectors were introduced which produce real-time, high-resolution images by converting X rays into light and the light into electrical signals that are interpreted by a computer, which produces digital data displayed as digital images that can be enlarged to target a specific area. Images are filmless and instantly available, formatted for electronic storage and/or transmission. First applied to mammography, this technology reduces radiation and the cost of film and storage, and can be used in industrial applications. Also in 1996, researchers at NASA’s Marshall Space Flight Center developed the high-resolution or high-brilliance X ray, which generates beams 100 times more intense than conventional X rays. These beams can be controlled and focused by reflecting them through tens of thousands of tiny curved capillaries, much as light is directed through fiber optics. NASA is using this instrument to define the atomic structure of proteins for use as blueprints in designing drugs. It may also lead to smaller, less expensive, and safer X-ray sources [26].

3. Image enhancement

Image enhancement is concerned with the sharpening of image features such as edges or contrast, and has been employed to improve the visual appearance of images. A variety of image enhancement approaches have been proposed for medical images, such as histogram equalization [27], unsharp masking [28, 29], etc. These approaches can be generally classified into two categories: global and local (adaptive) enhancement. A global enhancement applies a single transform or mapping to all image pixels, whereas a local enhancement uses an individual mapping for the local area of each processed pixel. Global enhancement methods may work well for some images, but poorly for many others, such as images with non-uniform illumination.
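As an illustration of this global/local distinction (using scikit-image on a synthetic stand-in image, rather than the tools proposed earlier):

    import numpy as np
    from skimage import exposure

    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)   # stand-in for an X-ray image

    # Global enhancement: one mapping derived from the histogram of the whole image.
    global_eq = exposure.equalize_hist(img)                        # returns floats in [0, 1]

    # Local (adaptive) enhancement: a separate, contrast-limited mapping per region (CLAHE).
    local_eq = exposure.equalize_adapthist(img, clip_limit=0.02)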
