Bone Fracture Detection

Project 2

Bone Fracture Detection Using MATLAB and Edge Detection

Aim:

A fracture can occur in any bone of the body, such as the wrist, ankle, hip, rib, leg or chest. A fracture cannot be detected easily by the naked eye, so it is examined in X-ray images. This project presents fracture detection in bone X-ray images. An efficient algorithm for bone fracture detection is proposed in this project; the fractured portion is selected manually.


Image Processing:

Image processing is a method of performing operations on an image in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image and the output may be an image or characteristics/features associated with that image. Image processing is nowadays among the most rapidly growing technologies.


INTRODUCTION

X-ray medical imaging plays a vital role in the diagnosis of bone fractures in the human body. The X-ray image helps medical practitioners in decision making and effective management of injuries. In order to improve diagnosis results, the stored digital images are further analyzed using medical image processing. The most common ailment of the human bone is fracture. Bone fractures are cracks that occur due to accidents. There are many types of bone fractures, such as transverse, comminuted, oblique, spiral, segmented, avulsed, impacted, torus and greenstick. For X-ray image segmentation of bone fractures, a number of edge detection algorithms such as Sobel, Prewitt, Roberts and Canny are generally used. This paper discusses the development of a novel X-ray image segmentation technique for bone fracture detection using a combination of the morphological gradient and the Canny edge detection method.
BONE
Bone is the rigid body tissue consisting of cells embedded in an abundant, hard intercellular material. Bones are of different shapes and sizes and perform many functions inside the human body. They support the body structurally, protect our vital organs, and allow us to move. They also provide an environment for bone marrow, where blood cells are created, and act as a storage area for minerals, particularly calcium. At birth, we have around 270 soft bones. As we grow, some of these fuse. Once we reach adulthood, we have 206 bones. The largest bone in the human body is the thighbone, or femur, and the smallest is the stapes in the middle ear, which is just 3 millimeters (mm) long. Bones are mostly made of the protein collagen, which forms a soft framework. The mineral calcium phosphate hardens this framework, giving it strength. More than 99 percent of the body's calcium is held in our bones and teeth.


Figure 1.1: Structure of a bone

1.2             TYPES OF FRACTURES
Bones are rigid, but they do bend or "give" somewhat when an outside force is applied. However, if the force is too great, the bones will break, just as a plastic ruler breaks when it is bent too far. The severity of a fracture usually depends on the force that caused the break. If the bone's breaking point has been exceeded only slightly, the bone may crack rather than break all the way through. If the force is extreme, such as in an automobile crash or a gunshot, the bone may shatter. If the bone breaks in such a way that bone fragments stick out through the skin, or a wound penetrates down to the broken bone, the fracture is called an "open" fracture. This type of fracture is particularly serious because, once the skin is broken, infection can occur in both the wound and the bone.
Common types of fractures include:
·         Stable fracture (non-displaced). The broken ends of the bone line up and are barely out of place.

·         Transverse fracture. This type of fracture has a horizontal fracture line.

·         Compound fracture. The skin may be pierced by the bone or by a blow that breaks the skin at the time of the fracture. The bone may or may not be visible in the wound.

·         Oblique fracture. This type of fracture has an angled pattern.

·         Comminuted fracture. In this type of fracture, the bone shatters into three or more pieces.
·         Greenstick fracture. The bone bends and cracks but does not fully break. This is commonly seen in children because their bones are softer and more flexible than those of adults.
1.3             IMAGING TECHNIQUES

Different techniques are used today to detect bone fractures, such as X-ray, Computed Tomography (CT scan), Magnetic Resonance Imaging (MRI) and Ultrasound; such techniques are known as imaging techniques. Among these four modalities, X-ray diagnosis is the one commonly used for fracture detection. However, if the fracture is complicated, a CT scan or MRI may be needed for further diagnosis and operation.

1.3.1    X-RAY

X-rays are a type of radiation called electromagnetic waves. X-ray imaging creates pictures of the inside of your body. The images show the parts of your body in different shades of black and white. This is because different tissues absorb different amounts of radiation. Calcium in bones absorbs x-rays the most, so bones look white. Fat and other soft tissues absorb less and look gray. Air absorbs the least, so lungs look black. The most familiar use of x-rays is checking for fractures (broken bones), but x-rays are also used in other ways. For example, chest x-rays can spot pneumonia. Mammograms use x-rays to look for breast cancer. When you have an x-ray, you may wear a lead apron to protect certain parts of your body. The amount of radiation you get from an x-ray is small. 
1.3.2        COMPUTED TOMOGRAPHY
A computerized tomography scan (CT or CAT scan) uses computers and rotating X-ray machines to create cross-sectional images of the body. These images provide more detailed information than normal X-ray images. They can show the soft tissues, blood vessels, and bones in various parts of the body. A CT scan may be used to visualize the:
  • head
  • shoulders
  • spine
  • heart
  • abdomen
  • knee
  • chest
During a CT scan, you lie in a tunnel-like machine while the inside of the machine rotates and takes a series of X-rays from different angles. These pictures are then sent to a computer, where they’re combined to create images of slices, or cross-sections, of the body.
1.3.3 MAGNETIC RESONANCE IMAGING
Magnetic resonance imaging (MRI) is a type of scan that uses strong magnetic fields and radio waves to produce detailed images of the inside of the body.
An MRI scanner is a large tube that contains powerful magnets. You lie inside the tube during the scan.
An MRI scan can be used to examine almost any part of the body, including the:
•           Brain and spinal cord 
•           Bones and joints
•           Breasts
•           Heart and blood vessels
•           Internal organs, such as the liver, womb or prostate gland
The results of an MRI scan can be used to help diagnose conditions, plan treatments and assess how effective previous treatment has been.
1.3.4    ULTRASOUND
Ultrasound is a type of imaging. It uses high-frequency sound waves to look at organs and structures inside the body. Health care professionals use it to view the heart, blood vessels, kidneys, liver, and other organs. During pregnancy, doctors use ultrasound to view the fetus. Unlike x-rays, ultrasound does not expose you to radiation. During an ultrasound test, you lie on a table. A special technician or doctor moves a device called a transducer over part of your body. The transducer sends out sound waves, which bounce off the tissues inside your body. The transducer also captures the waves that bounce back. The ultrasound machine creates images from the sound waves.
2.1 INPUT X-RAY IMAGE
Input X-ray image acquisition is the very first step of the proposed system. This work depends on X-ray images to diagnose long-bone fractures. The initial step is image acquisition, which gathers the data in the form of the digital X-ray images required in this research. Image acquisition can be broadly defined as the action of retrieving an image from some hardware source. The JPG format is used for input X-ray images in this work because it is easy to process with image processing algorithms. Moreover, modern X-ray imaging machines can output the JPG format as well as the DICOM format, so no extra processing step is needed to convert from DICOM to JPG.
Figure 2.2: X-Ray


2.2 PREPROCESSING
Preprocessing is a common name for operations on images at the lowest level of abstraction; both input and output are intensity images. The aim of preprocessing is an improvement of the image data that suppresses unwanted distortions or enhances image features important for further processing. Preprocessing is an essential stage, since it controls the suitability of the results for the successive stages. Image enhancement techniques can be used in the preprocessing or postprocessing portion.

Image sharpening refers to any enhancement technique that highlights edges and fine details in an image. The basic concept of unsharp masking (USM) is to blur the original image first, then subtract the blurred image from the original image. As the final stage, the difference is added back to the original image.

In this step, image preprocessing is carried out by the following procedure. The input X-ray image is an RGB image. Firstly, this image is converted to a grayscale image, which is a single-layer image, to speed up processing and reduce computation. Then the unsharp masking algorithm is applied to the gray image to emphasize, sharpen or smooth image features for display and analysis, yielding an edge-enhanced image. Undesired effects can be reduced by using a mask to apply sharpening only to desired regions, sometimes termed "smart sharpen," according to the three setting controls of unsharp masking: amount, for how much darker and how much lighter the edge borders become; radius, for the size of the edges to be enhanced, i.e. how wide the edge rims become; and threshold, for the minimal brightness change that will be sharpened. The resulting enhanced image is used for the feature extraction step. The flow chart for preprocessing is shown below.
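The blur-subtract-add sequence just described can be made concrete. The project's implementation is in MATLAB; the snippet below is only an illustrative Python sketch of the same idea on a 1-D row of intensities, with a 3-tap box blur standing in for the usual Gaussian blur, and `amount`/`threshold` mirroring the USM controls named above.

```python
def unsharp_mask(row, amount=1.0, threshold=0):
    """Unsharp masking on a 1-D row of pixel intensities.

    Blur the signal, subtract the blur from the original to obtain the
    'mask' (the high-frequency detail), then add the mask back, scaled
    by `amount`, wherever it exceeds `threshold`.
    """
    n = len(row)
    # 3-tap box blur with edge replication (stand-in for a Gaussian blur)
    blurred = [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3.0
               for i in range(n)]
    mask = [orig - blur for orig, blur in zip(row, blurred)]
    return [orig + amount * m if abs(m) > threshold else orig
            for orig, m in zip(row, mask)]

# A step edge: sharpening overshoots on both sides, making the edge 'pop'.
edge_row = [10, 10, 10, 100, 100, 100]
sharpened = unsharp_mask(edge_row, amount=1.0)
```

The overshoot on either side of the step is exactly the halo effect that the radius and threshold controls of a real USM filter are used to tame.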
2.3 EDGE DETECTION
Edge detection is an important operation in image processing that reduces the number of pixels while preserving the structure of the image by determining the boundaries of objects in the image. Edge detection is the method of identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. There are two general approaches to edge detection that are commonly used: gradient and Laplacian. Gradient methods use the first derivative of the image, and Laplacian methods use the second derivative of the image to find edges. Our method uses the Sobel edge detector, which belongs to the gradient family.
Figure 2.4:  (a) Original image. (b) Edge detected by Canny edge detector. (c) Portion enclosed is the edge detected by modified Canny's edge detection algorithm

2.3.1 SOBEL EDGE DETECTION
The sobel is one of the most commonly used edge detectors. It is based on convolving the image with a small, separable, and integer valued filter in horizontal and vertical direction and is therefore relatively inexpensive in terms of computations. The Sobel edge enhancement filter has the advantage of providing differentiating (which gives the edge response) and smoothing (which reduces noise) concurrently. 
Compared to other edge operators, Sobel has two main advantages: (1) because of the introduction of the averaging factor, it has some smoothing effect on the random noise of the image; (2) because it is the differential of two rows or two columns, the edge elements on both sides are enhanced, so that the edge appears thick and bright.

In the spatial domain, edge detection is usually carried out using local operators. Those usually used are the orthogonal gradient operator, the directional differential operator and some other operators related to the second-order differential operator.
The Sobel operator is a kind of orthogonal gradient operator. The gradient corresponds to the first derivative, and a gradient operator is a derivative operator. For a continuous function f(x, y), at the position (x, y) its gradient can be expressed as the vector ∇f(x, y) = (∂f/∂x, ∂f/∂y), whose two components are the first derivatives along the X and Y directions respectively.
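As a concrete illustration, separate from the project's MATLAB code (which simply calls edge(...,'sobel')), the Python sketch below convolves a tiny grayscale patch with the standard 3×3 Sobel kernels and combines the two responses into a gradient magnitude.

```python
import math

# Standard Sobel kernels: x-derivative (KX) and y-derivative (KY).
# The centre row/column carries double weight, giving a smoothing effect.
KX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]
KY = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]

def sobel_at(img, r, c):
    """Gradient magnitude at interior pixel (r, c) of a 2-D intensity list."""
    gx = sum(KX[i][j] * img[r - 1 + i][c - 1 + j]
             for i in range(3) for j in range(3))
    gy = sum(KY[i][j] * img[r - 1 + i][c - 1 + j]
             for i in range(3) for j in range(3))
    return math.hypot(gx, gy)   # |grad| = sqrt(gx^2 + gy^2)

# A vertical step edge: dark left half, bright right half.
patch = [[0, 0, 0, 255, 255],
         [0, 0, 0, 255, 255],
         [0, 0, 0, 255, 255]]
mag_flat = sobel_at(patch, 1, 1)   # inside the flat dark region
mag_edge = sobel_at(patch, 1, 2)   # right next to the step
```

In a flat region both derivatives vanish, so the response is zero; next to the step the x-derivative fires strongly, which is exactly the behaviour an edge detector thresholds on.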


2.3.2 PREWITT EDGE DETECTION

The Prewitt operator is used in image processing, particularly within edge detection algorithms. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Prewitt operator is either the corresponding gradient vector or the norm of this vector. The Prewitt operator is based on convolving the image with a small, separable, and integer valued filter in horizontal and vertical directions and is therefore relatively inexpensive in terms of computations like Sobel and Kayyali operators. On the other hand, the gradient approximation which it produces is relatively crude, in particular for high frequency variations in the image. The Prewitt operator was developed by Judith M. S. Prewitt.


In simple terms, the operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction. The result therefore shows how "abruptly" or "smoothly" the image changes at that point, and therefore how likely it is that that part of the image represents an edge, as well as how that edge is likely to be oriented. In practice, the magnitude (likelihood of an edge) calculation is more reliable and easier to interpret than the direction calculation.
Mathematically, the gradient of a two-variable function (here the image intensity function) is at each image point a 2D vector with the components given by the derivatives in the horizontal and vertical directions. At each image point, the gradient vector points in the direction of largest possible intensity increase, and the length of the gradient vector corresponds to the rate of change in that direction. This implies that the result of the Prewitt operator at an image point which is in a region of constant image intensity is a zero vector and at a point on an edge is a vector which points across the edge, from darker to brighter values.
Mathematically, the operator uses two 3×3 kernels which are convolved with the original image to calculate approximations of the derivatives: one for horizontal changes, and one for vertical. If we define A as the source image, and Gx and Gy as two images which at each point contain the horizontal and vertical derivative approximations respectively, the latter are computed as:

     [ +1  0  -1 ]               [ +1  +1  +1 ]
Gx = [ +1  0  -1 ] * A  and Gy = [  0   0   0 ] * A
     [ +1  0  -1 ]               [ -1  -1  -1 ]

where * here denotes the 2-dimensional convolution operation.
Since the Prewitt kernels can be decomposed as the products of an averaging and a differentiation kernel, they compute the gradient with smoothing. Therefore, it is a separable filter.
The x-coordinate is defined here as increasing in the "left" direction, and the y-coordinate is defined as increasing in the "up" direction. At each point in the image, the resulting gradient approximations can be combined to give the gradient magnitude, G = sqrt(Gx^2 + Gy^2).
The Prewitt operator is similar to the Sobel operator and is used for detecting vertical and horizontal edges in images. However, unlike Sobel, this operator does not place any extra emphasis on the pixels that are closer to the center of the mask.
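The kernels given above can be exercised on a toy patch. This is an illustrative Python sketch, not part of the project's MATLAB code (which calls edge(...,'prewitt')); note that, unlike Sobel, every row and column is weighted equally.

```python
# Prewitt kernels: every row/column weighted equally, unlike Sobel's
# double weight on the centre row/column.
PX = [[+1, 0, -1],
      [+1, 0, -1],
      [+1, 0, -1]]
PY = [[+1, +1, +1],
      [ 0,  0,  0],
      [-1, -1, -1]]

def prewitt_at(img, r, c):
    """Horizontal and vertical derivative approximations at pixel (r, c)."""
    gx = sum(PX[i][j] * img[r - 1 + i][c - 1 + j]
             for i in range(3) for j in range(3))
    gy = sum(PY[i][j] * img[r - 1 + i][c - 1 + j]
             for i in range(3) for j in range(3))
    return gx, gy

# Vertical step edge (bright on the left): Gx fires, Gy stays zero.
patch = [[9, 9, 0, 0],
         [9, 9, 0, 0],
         [9, 9, 0, 0]]
gx, gy = prewitt_at(patch, 1, 1)
```

A purely vertical edge produces a nonzero Gx and a zero Gy, matching the interpretation of the two kernels as horizontal and vertical derivative approximations.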



2.4 SEGMENTATION
Segmentation is the process of dividing the given image into regions homogeneous with respect to certain features, such as color or intensity. It is an essential step in image analysis and locates objects and boundaries (lines, curves, etc.). The k-means clustering technique is used in this work. The purpose of this algorithm is to minimize an objective function, here an absolute-difference function. In this algorithm, the squared or absolute difference between a pixel and a cluster center is calculated; the difference is typically based on pixel intensity, color, texture and location. The quality of the solution depends on the initial set of clusters and the value of k. After segmentation, the image is cropped to the fracture area, with some limitations.
Figure 2.5: Segmentation
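To make the k-means step concrete, the following is a minimal Python sketch of 1-D k-means on pixel intensities (the project itself runs in MATLAB): it alternates between assigning each pixel to its nearest centre by absolute difference and recomputing each centre as its cluster mean.

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal k-means on scalar pixel intensities.

    Alternates between (1) assigning each value to the nearest centre
    by absolute difference and (2) moving each centre to its cluster mean.
    """
    # Spread the initial centres across the sorted value range.
    centres = sorted(values)[::max(1, len(values) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centres[i]))
            clusters[nearest].append(v)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

# Dark background intensities vs bright bone intensities split cleanly.
pixels = [12, 15, 10, 200, 210, 205, 11, 198]
centres, clusters = kmeans_1d(pixels, k=2)
```

As the text notes, the quality of the result depends on the initial centres and on k; here k = 2 corresponds to separating bone from background.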
2.5 FEATURE EXTRACTION

A feature is an image characteristic that can capture a certain visual property of the image. Since the sharpened image increases the contrast between bright and dark regions, the feature extraction step can be conducted directly to bring out features. Corner detection is a technique for extracting a certain kind of feature. A corner can be defined as the junction of two edges. When a bone is broken, bone pieces appear in the form of corner points between the bright and dark regions in the X-ray image.

Feature extraction is a type of dimensionality reduction that efficiently represents interesting parts of an image as a compact feature vector. This approach is useful when image sizes are large and a reduced feature representation is required to quickly complete tasks such as image matching and retrieval. Feature detection, feature extraction, and matching are often combined to solve common computer vision problems such as object detection and recognition, content-based image retrieval, face detection and recognition, and texture classification.

Figure 2.6: Detecting an object (left) in a cluttered scene (right) using a combination of feature detection, feature extraction, and matching.
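As a rough illustration of the "corner = junction of two edges" idea described above, the Python sketch below flags pixels where both the horizontal and the vertical intensity differences are strong at once. This is a deliberately crude stand-in for a real corner detector (such as Harris), intended only to show the principle, and is not part of the project's MATLAB pipeline.

```python
def corner_points(img, thresh):
    """Flag interior pixels where BOTH gradient components are strong,
    i.e. a horizontal and a vertical edge meet at the same pixel."""
    pts = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            gx = img[r][c + 1] - img[r][c - 1]   # central difference in x
            gy = img[r + 1][c] - img[r - 1][c]   # central difference in y
            if abs(gx) > thresh and abs(gy) > thresh:
                pts.append((r, c))
    return pts

# An L-shaped bright region: its corners change intensity in both
# directions at once, while straight edge pixels change in only one.
img = [[0,   0,   0, 0],
       [0, 255, 255, 0],
       [0, 255,   0, 0],
       [0,   0,   0, 0]]
corners = corner_points(img, thresh=100)
```

Only pixels where two edges meet survive the double threshold, which is why corner counts can act as evidence of a break in the bone contour.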

2.6 CLASSIFICATION
Classification is a step of data analysis that studies a set of data and categorizes it into a number of categories. Each category has its own characteristics, and the data belonging to a category share its properties. The fracture detection techniques proposed in the literature can be loosely categorized into classification-based and transform-based. In this project the classification-based approach, the last step of the system, is used to complete the recognition of the bone fracture in the X-ray image. Classification includes a broad range of decision-theoretic approaches to the identification of images. Moreover, classification comes in two forms: binary classification and multiclass classification. In binary classification, the better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes.

With a neural network, classification is fast, but training can be very slow, and a large neural network requires high processing time. SVM classification can avoid under-fitting and over-fitting; however, it gives poor performance when the number of features is much greater than the number of samples. Here, the Decision Tree (DT) and the KNN classifier are applied. The DT is a very efficient model that can produce an accurate and easy-to-understand model in a short time. A decision tree, or classification tree, is a tree in which each internal (non-leaf) node is labeled with an input feature. Decision trees are used in many different disciplines, including diagnosis, cognitive science, artificial intelligence, game theory, engineering and data mining. In this work, the system needs to make a simple decision between the fractured-bone condition and the normal-bone condition, so the DT is applied to these two conditions.

The DT classifier makes the decision as follows: if there is one or more corner point, the condition is a fractured bone, whereas if there is no corner point, the condition is a normal bone. K-Nearest Neighbor can achieve excellent performance for arbitrary class distributions.
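The decision rule above is effectively a one-node decision tree; a minimal sketch (in Python, for illustration only):

```python
def classify_bone(num_corner_points):
    """One-node decision tree: any corner point on the bone contour is
    taken as evidence of a fracture; none means a normal bone."""
    return "fracture" if num_corner_points >= 1 else "normal"

result_a = classify_bone(3)   # corner points found
result_b = classify_bone(0)   # no corner points
```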
2.6.1 DIFFERENT CLASSIFIER
Supervised classification: when the classes are already defined for the training sets, the classification is known as supervised classification.

Unsupervised classification: in unsupervised classification techniques, the classes are not defined for the training sets; the classes are undefined.

In the classification module, different classifiers are employed to classify the unknown testing instances of the various classes based on the training instances. The main classifiers used for classification are k-NN, PNN, SVM, SSVM, ANN, etc. In order to avoid any bias from unbalanced features, a min-max normalization procedure is used to normalize the extracted features.
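Min-max normalization rescales each feature to [0, 1] via x' = (x - min) / (max - min). A minimal Python sketch (the feature values shown are made up purely for illustration):

```python
def min_max_normalize(feature_column):
    """Rescale one feature column to [0, 1] so no feature dominates by scale."""
    lo, hi = min(feature_column), max(feature_column)
    if hi == lo:                      # constant feature: map everything to 0
        return [0.0 for _ in feature_column]
    return [(x - lo) / (hi - lo) for x in feature_column]

# Hypothetical feature values (e.g. fracture-region areas in pixels).
areas = [120.0, 300.0, 480.0]
norm = min_max_normalize(areas)
```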
K-Nearest Neighbor: k-NN is based on the idea of estimating the class of an unknown instance from its neighbors. The basic principle behind k-NN is the assumption that feature vectors lying close to each other belong to the same class, so it tries to group instances whose feature vectors lie close together into the same class. The class of an unknown instance is selected by looking at its k nearest neighbors in the training dataset. The main advantage of k-NN is its ability to handle multi-class problems; it is also robust to noisy data because it averages over the k nearest neighbors. Various distance metrics can be used, such as the Euclidean, cosine, city block, Minkowski, Chebychev and correlation distances. The Euclidean distance is used as the distance metric in this classification module. The value of k is a key factor in the k-NN classifier, as the classification performance of k-NN depends on the value of k.
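The majority-vote rule with Euclidean distance can be sketched directly in Python; the feature vectors and labels below are hypothetical, purely to illustrate the mechanism:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points, using Euclidean distance as the metric."""
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors, e.g. (corner count, edge density),
# with their known labels.
train = [((0.0, 0.1), "normal"),   ((0.1, 0.2), "normal"),
         ((0.9, 0.8), "fracture"), ((1.0, 0.9), "fracture"),
         ((0.8, 1.0), "fracture")]
label = knn_predict(train, (0.85, 0.9), k=3)
```

An odd k avoids ties in the two-class case, which is one reason k = 3 is a common default.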
Artificial Neural Network: ANNs are computational models based on a large collection of simple neural units. Each neural unit is connected with many other units, and links can enhance the activation state of adjoining neural units. Each neural unit is computed using a summation function. An ANN is a combination of many artificial neurons that are linked together according to a specific network architecture. Its prime goal is to transform inputs into meaningful outputs. ANNs have been used, for example, to control the movement of a robot based on self-perception and other information. These systems are self-learning and trained, and work in areas where the features are difficult to extract.
Support Vector Machine: The SVM classifier comes under the class of supervised learning machines and works on the basis of statistical theory. The SVM classifier can perform both linear and non-linear classification. With the help of the available training data, it creates a hyperplane between the classes that intuitively achieves good separation; however, the sets to be discriminated are often not linearly separable in the input space. In the non-linear classification module, the data is mapped from the input space to a higher-dimensional feature space by applying a kernel function to the input data. The Gaussian radial basis function has been used for classification of the data. Modern algorithms use sub-gradient and coordinate descent methods, which have the big advantage of handling large and sparse datasets.
[1] Bone Fracture Detection Using Morphological Gradient Based Image Segmentation Technique: Medical X-ray imaging has wide acceptance in computer-aided clinical diagnosis. Computer-aided bone fracture detection is mainly implemented to assist doctors in providing a better diagnosis report. Bone fracture can occur in any part of the human body, such as the leg (tibia and fibula), hand (radius and ulna) and foot. This paper mainly discusses the computer-aided diagnosis of radius bone fracture detection in X-ray images. Accurate bone structure and fracture detection is achieved using a novel morphological-gradient-based edge detection technique, in which Canny edge detection is applied after finding the morphological gradient. The morphological gradient technique removes noise, enhances image details and highlights the fracture region. The fracture edges are more prominently revealed due to the combined effect of the morphological gradient technique and the Canny edge detection algorithm. The processed image outputs show that the proposed technique provides efficient fracture detection when compared with other edge detection methods.
[2] The proposed work presents a novel morphology-gradient-based image segmentation algorithm to detect radius bone fracture edges. Bone structure and fracture edges are detected more accurately using the proposed image segmentation method than with other edge detection techniques like Sobel, Prewitt and Canny. Here, the morphological gradient image clearly highlights the sharp gray-level transitions occurring in the fracture region.
Cephas Paul Edward, Hilda Hepzibah.
[3] The proposed work presents an approach to detect fractures and their type. When X-ray images are examined manually, it is a time-consuming process and also prone to errors, so there is a great need for the development of automated techniques and methods to verify the presence or absence of fractures.
BFD IMPLEMENTATION CODE:-
Algorithm:
GUI(Graphical User Interface) Code:-
function varargout = GUI(varargin)
% GUI MATLAB code for GUI.fig
%      GUI, by itself, creates a new GUI or raises the existing
%      singleton*.
%      H = GUI returns the handle to a new GUI or the handle to
%      the existing singleton*.
%
%      GUI('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in GUI.M with the given input arguments.
%
%      GUI('Property','Value',...) creates a new GUI or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before GUI_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to GUI_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES
% Edit the above text to modify the response to help GUI
% Last Modified by GUIDE v2.5 22-Dec-2015 10:26:59
% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @GUI_OpeningFcn, ...
                   'gui_OutputFcn',  @GUI_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before GUI is made visible.
function GUI_OpeningFcn(hObject, eventdata, handles, varargin)

% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to GUI (see VARARGIN)

% Choose default command line output for GUI
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes GUI wait for user response (see UIRESUME)
% uiwait(handles.figure1);
% --- Outputs from this function are returned to the command line.
function varargout = GUI_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;


% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
[filename, user_canceled] = imgetfile();   % let the user choose an X-ray image
if user_canceled
    msgbox('No file selected','Error','error');
    return
end
global myimg1;
myimg1 = imread(filename);
I = myimg1;                                % process the selected image
grayimage = rgb2gray(I);
%ad=imadjust(grayimage,[0.1,0.9],[0.0,1.0]);
ad=imadjust(grayimage);
%filtered2=fspecial('average',[3 3]);
%filtered=imfilter(grayimage,filtered2);
filtered=imnoise(ad,'salt & pepper',0); % noise density 0 adds no noise, so this leaves the adjusted image unchanged
%sharpen= imsharpen(I);
%contrast=imcontrast(filtered);
%pixel_avg=mean(filtered(:));
%perprocessed=pixel_avg-grayimage;
SE=strel('disk',3);                      % disk-shaped structuring element
%filtered=imopen(grayimage,SE);
erosion=imerode(filtered,SE);
dilation=imdilate(filtered,SE);
morphgrad=dilation-erosion;              % morphological gradient ('diff' would shadow a built-in)
%image_dilate_diff=dilation-grayimage;
%image_erode_diff=erosion-grayimage;
gradient_image=filtered-morphgrad;
SE2=strel('diamond',3);
new_dilation=imdilate(gradient_image,SE2);
axes(handles.axes1);
imshow(I);
c=edge(new_dilation,'sobel');
axes(handles.axes2);
imshow(c);
c2=edge(new_dilation,'prewitt');
axes(handles.axes3);
imshow(c2);

c3=edge(new_dilation,'canny',0.2);
axes(handles.axes4);
imshow(c3);

Algorithm:
SOBEL, PREWITT, CANNY EDGE DETECTION CODE:-
I = imread('image.jpg');
grayimage=rgb2gray(I);
%ad=imadjust(grayimage,[0.1,0.9],[0.0,1.0]);
ad=imadjust(grayimage);
%filtered2=fspecial('average',[3 3]);
%filtered=imfilter(grayimage,filtered2);
filtered=imnoise(ad,'salt & pepper',0); % noise density 0 adds no noise, so this leaves the adjusted image unchanged
%sharpen= imsharpen(I);
%contrast=imcontrast(filtered);
%pixel_avg=mean(filtered(:));
%perprocessed=pixel_avg-grayimage;
SE=strel('disk',3);                      % disk-shaped structuring element
%filtered=imopen(grayimage,SE);
erosion=imerode(filtered,SE);
dilation=imdilate(filtered,SE);
morphgrad=dilation-erosion;              % morphological gradient ('diff' would shadow a built-in)
%image_dilate_diff=dilation-grayimage;
%image_erode_diff=erosion-grayimage;
gradient_image=filtered-morphgrad;
SE2=strel('diamond',3);
new_dilation=imdilate(gradient_image,SE2);
c=edge(new_dilation,'sobel');
figure(1);
imshow(c);
c2=edge(new_dilation,'prewitt');
figure(2);
imshow(c2);
c3=edge(new_dilation,'canny',0.2);
figure(3);
imshow(c3);
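The dilation-minus-erosion step at the heart of the code above is the morphological gradient. The following pure-Python sketch (illustrative only, not part of the MATLAB implementation) shows on a binary image why it responds exactly on object boundaries, here with a 3×3 square structuring element instead of the disk/diamond used above.

```python
def erode(img):
    """Binary erosion with a 3x3 square structuring element (zero padding)."""
    h, w = len(img), len(img[0])
    get = lambda r, c: img[r][c] if 0 <= r < h and 0 <= c < w else 0
    return [[min(get(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1))
             for c in range(w)] for r in range(h)]

def dilate(img):
    """Binary dilation with a 3x3 square structuring element (zero padding)."""
    h, w = len(img), len(img[0])
    get = lambda r, c: img[r][c] if 0 <= r < h and 0 <= c < w else 0
    return [[max(get(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1))
             for c in range(w)] for r in range(h)]

def morph_gradient(img):
    """Dilation minus erosion: nonzero exactly along object boundaries."""
    d, e = dilate(img), erode(img)
    return [[dv - ev for dv, ev in zip(drow, erow)]
            for drow, erow in zip(d, e)]

# A solid 3x3 'bone' blob inside a 5x5 frame: the gradient outlines it,
# leaving only the single fully-interior pixel at zero.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
g = morph_gradient(img)
```

Deep inside the blob, dilation and erosion agree, so the gradient vanishes; along the boundary they disagree by one, which is the edge band that the subsequent Sobel/Prewitt/Canny passes then trace.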
