
Computer Vision Project: Photo Editing Program
Ahmad Beilouni 21737

As stated in the proposal, my project aim is to design a piece of software that performs as many transformations, filters and changes as possible on any given input image. However, since I am doing this project alone, I could not achieve as large a number of transformations as stated in the proposal, simply because I underestimated what designing a complete piece of software means and how many details have to be handled. Nevertheless, I was able to implement 16 of them in an efficient way (for instance, if we were three people, the number of transformations achieved would be 16 * 3 = 48).

In this part I will describe the things that were done in an efficient way:

1- Oil painting filter:

The Image Oil Painting Filter consists of two main components: colour gradients and pixel colour intensities. As implied by the title, the images produced by this filter are similar in appearance to oil paintings. Result images express a lesser degree of detail when compared to the source/input images, and the filter also tends to output images which appear to have smaller colour ranges. Practically, in order to implement this we need to follow these four steps:

A – Iterate each pixel – Every pixel forming part of the source/input image should be iterated. When iterating a pixel, determine the neighbouring pixel values based on the specified filter size/filter range.

B – Calculate colour intensity – Determine the colour intensity of each pixel being iterated and that of its neighbouring pixels. The neighbouring pixels included should extend to a range determined by the specified filter size. The calculated value should be reduced in order to match a value ranging from zero to the number of intensity levels specified. The intensity level is determined by the following formula:

I = round(((R + G + B) / 3) * l / 255)

where:
I – Intensity: the calculated intensity value.
R – Red: the value of a pixel's red colour component.
G – Green: the value of a pixel's green colour component.
B – Blue: the value of a pixel's blue colour component.
l – Number of intensity levels: the maximum number of intensity levels specified.

C – Determine the maximum neighbourhood colour intensity – When calculating the colour intensities of a pixel neighbourhood, determine the maximum intensity value. In addition, record the occurrence of each intensity level and sum the red, green and blue colour components of the pixels that map to the same intensity level. The number of intensity levels is given as input by the user; the more intensity levels we choose, the more quality we gain (a good value is around 25).

D – Assign the result pixel – The value assigned to the corresponding pixel in the resulting image equates to the colour sum total of the pixels that expressed the most frequent intensity level. The sum total should be averaged by dividing the colour sum total by the number of occurrences of that intensity level.

Let's see what happens if we choose a small filter size such as 9, and then a large one such as 15. As we can see from the example images, there is a big difference in the amount of detail between the filter size of 9 and the filter size of 15: the size-15 filter loses more detail in the image (it is exactly as if we were using a wide brush). As for execution time, the larger the filter size we choose, the more time is needed to process and sort the neighbours: it takes approximately 7 seconds to process an image with a filter size of 15 and around 0.89 seconds with a filter size of 3.
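To make steps A–D more concrete, here is a minimal C# sketch of the idea, written in the same GetPixel/SetPixel style as the grayscale code shown later in this report. The method name OilPaint, the clamping of the neighbourhood at the image borders and the exact loop structure are my own illustrative choices, not the project's actual code:

using System;
using System.Drawing;

static class OilPaintSketch
{
    // filterSize is the neighbourhood radius, levels the number of intensity levels.
    public static Bitmap OilPaint(Bitmap source, int filterSize, int levels)
    {
        Bitmap result = new Bitmap(source.Width, source.Height);

        for (int y = 0; y < source.Height; y++)
        {
            for (int x = 0; x < source.Width; x++)
            {
                // Steps B and C: histogram of intensity levels in the neighbourhood,
                // plus the R, G and B sums of the pixels falling into each level.
                int[] counts = new int[levels + 1];
                int[] sumR = new int[levels + 1];
                int[] sumG = new int[levels + 1];
                int[] sumB = new int[levels + 1];

                for (int dy = -filterSize; dy <= filterSize; dy++)
                {
                    for (int dx = -filterSize; dx <= filterSize; dx++)
                    {
                        int nx = Math.Min(Math.Max(x + dx, 0), source.Width - 1);
                        int ny = Math.Min(Math.Max(y + dy, 0), source.Height - 1);
                        Color c = source.GetPixel(nx, ny);

                        // I = round(((R + G + B) / 3) * l / 255)
                        int intensity = (int)Math.Round((c.R + c.G + c.B) / 3.0 * levels / 255.0);
                        counts[intensity]++;
                        sumR[intensity] += c.R;
                        sumG[intensity] += c.G;
                        sumB[intensity] += c.B;
                    }
                }

                // Step D: take the most frequent intensity level and average its colour sums.
                int maxIndex = 0;
                for (int i = 1; i <= levels; i++)
                    if (counts[i] > counts[maxIndex]) maxIndex = i;

                int n = counts[maxIndex];
                result.SetPixel(x, y, Color.FromArgb(sumR[maxIndex] / n, sumG[maxIndex] / n, sumB[maxIndex] / n));
            }
        }

        return result;
    }
}

The per-neighbourhood histogram is what gives the painted look: every output pixel takes the average colour of the most common intensity level around it, which is also why larger filter sizes behave like a wider brush and take longer to compute.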
2- Cartoon painting filter:

A cartoon filter effect can be achieved by combining an Image Oil Painting filter and an edge detection filter. The Oil Painting filter has the effect of creating more gradual image colour gradients, in other words reducing image edge intensity. The steps required to implement a cartoon filter can be listed as follows:

Apply the Oil Painting filter – Applying an Oil Painting filter creates the perception that the resulting image has been painted by hand.

Implement edge detection – Using the original source/input image, create a new binary image detailing the image edges.

Overlay the edges on the Oil Painting image – Iterate each pixel forming part of the edge-detected image. If the pixel being iterated forms part of an edge, the related pixel in the Oil Painting filtered image should be set to black. Because the edge-detected image was created as a binary image, a pixel forms part of an edge if that pixel equates to white.

In order to detect the edges properly, I used gradient-based edge detection. Practically, this edge detection algorithm is applied using the following steps:

A – Iterate each pixel – Each pixel forming part of the source/input image should be iterated.

B – Determine the horizontal and vertical gradients – Calculate the colour value difference between the currently iterated pixel's left and right neighbour pixels, as well as its top and bottom neighbour pixels. If the gradient exceeds the specified threshold, continue to step H.

C – Determine the horizontal gradient – Calculate the colour value difference between the currently iterated pixel's left and right neighbour pixels. If the gradient exceeds the specified threshold, continue to step H.

D – Determine the vertical gradient – Calculate the colour value difference between the currently iterated pixel's top and bottom neighbour pixels. If the gradient exceeds the specified threshold, continue to step H.

E – Determine the diagonal gradients – Calculate the colour value difference between the currently iterated pixel's north-western and south-eastern neighbour pixels, as well as its north-eastern and south-western neighbour pixels. If the gradient exceeds the specified threshold, continue to step H.

F – Determine the NW-SE gradient – Calculate the colour value difference between the currently iterated pixel's north-western and south-eastern neighbour pixels. If the gradient exceeds the specified threshold, continue to step H.

G – Determine the NE-SW gradient – Calculate the colour value difference between the currently iterated pixel's north-eastern and south-western neighbour pixels.

H – Determine and set the result pixel value – If any of the six gradients calculated exceeded the specified threshold value, set the related pixel in the resulting image to white; if not, set the related pixel to black.

As we know, any edge detection needs a threshold to start with. In the example images, a threshold of 97 marks many details as edges, while with a value of 165 fewer details are marked as edges.
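As an illustration of steps A–H, the following sketch uses the average of a pixel's R, G and B components as its colour value and a single threshold for all six gradient tests; the names GradientEdgeDetect and Gray are illustrative assumptions rather than the project's actual class or method names, and the one-pixel image border is simply left untouched here:

using System;
using System.Drawing;

static class GradientEdgeSketch
{
    // Grayscale value of a pixel, used as its "colour value" in the gradient tests.
    static int Gray(Bitmap b, int x, int y)
    {
        Color c = b.GetPixel(x, y);
        return (c.R + c.G + c.B) / 3;
    }

    public static Bitmap GradientEdgeDetect(Bitmap source, int threshold)
    {
        Bitmap result = new Bitmap(source.Width, source.Height);

        for (int y = 1; y < source.Height - 1; y++)
        {
            for (int x = 1; x < source.Width - 1; x++)
            {
                int horizontal = Math.Abs(Gray(source, x - 1, y) - Gray(source, x + 1, y));
                int vertical   = Math.Abs(Gray(source, x, y - 1) - Gray(source, x, y + 1));
                int nwSe       = Math.Abs(Gray(source, x - 1, y - 1) - Gray(source, x + 1, y + 1));
                int neSw       = Math.Abs(Gray(source, x + 1, y - 1) - Gray(source, x - 1, y + 1));

                bool edge = horizontal + vertical > threshold   // step B
                         || horizontal > threshold              // step C
                         || vertical > threshold                // step D
                         || nwSe + neSw > threshold             // step E
                         || nwSe > threshold                    // step F
                         || neSw > threshold;                   // step G

                // Step H: white marks an edge pixel, black marks a non-edge pixel.
                result.SetPixel(x, y, edge ? Color.White : Color.Black);
            }
        }

        return result;
    }
}

For the cartoon filter itself, every white pixel of the binary image produced here is then overlaid as black on the oil-painted image.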
3- Min/Max edge detection:

The Min/Max Edge Detection algorithm performs a pixel neighbourhood inspection, comparing the maximum and minimum colour channel values. Should the difference between the maximum and minimum colour values be significant, it is an indication of a significant change in gradient level within the pixel neighbourhood being inspected.

The Min/Max Edge Detection algorithm makes provision for optional image smoothing, implemented in the form of a median filter. Image noise represents interference in relation to regular gradient level expression. Image noise does not signal the presence of an image edge, although it could potentially result in incorrectly determining image edge presence. Image noise and its negative impact can be significantly reduced by applying image smoothing, also sometimes referred to as image blur.

There are five steps needed to accomplish this:

1. Image noise reduction – If image noise reduction is required, apply a median filter to the source image.
2. Iterate through all of the pixels contained within the image.
3. For each pixel being iterated, determine the neighbouring pixels. The pixel neighbourhood size is determined by the specified filter size.
4. Determine the minimum and maximum pixel values expressed within the pixel neighbourhood.
5. Subtract the minimum from the maximum value and assign the result to the pixel currently being iterated. Apply grayscale conversion to the pixel currently being iterated, but only if grayscale conversion has been configured.

Let's try applying this edge detection with a filter size of 3 (a median-style filter where the minimum value of the sorted neighbourhood is subtracted from its maximum value), which of course gives a better result in terms of quality and coverage of detail, and then apply grayscale conversion on top of it. It is obvious that the result obtained is very similar to the results we used to obtain in the homework (especially with the Canny edge detection algorithm).
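A minimal sketch of steps 2 to 5 might look as follows, assuming the optional median pre-filter from step 1 is skipped and grayscale conversion is always applied; the name MinMaxEdgeDetect and the clamping at the image borders are my own illustrative choices:

using System;
using System.Drawing;

static class MinMaxEdgeSketch
{
    // For each pixel, take the minimum and maximum grayscale value in its
    // neighbourhood and write out (max - min); large differences mark edges.
    public static Bitmap MinMaxEdgeDetect(Bitmap source, int filterSize)
    {
        Bitmap result = new Bitmap(source.Width, source.Height);
        int radius = filterSize / 2;

        for (int y = 0; y < source.Height; y++)
        {
            for (int x = 0; x < source.Width; x++)
            {
                int min = 255, max = 0;

                for (int dy = -radius; dy <= radius; dy++)
                {
                    for (int dx = -radius; dx <= radius; dx++)
                    {
                        int nx = Math.Min(Math.Max(x + dx, 0), source.Width - 1);
                        int ny = Math.Min(Math.Max(y + dy, 0), source.Height - 1);
                        Color c = source.GetPixel(nx, ny);

                        int gray = (c.R + c.G + c.B) / 3;   // grayscale conversion
                        if (gray < min) min = gray;
                        if (gray > max) max = gray;
                    }
                }

                int edge = max - min;                        // step 5
                result.SetPixel(x, y, Color.FromArgb(edge, edge, edge));
            }
        }

        return result;
    }
}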
4- Bitonal filter:

What this filter basically does is express each pixel value of the image in terms of one of two pixel values chosen by the user. This is done by determining a threshold at which to separate them. The bitonal bitmap application enables the user to load an input image file from the local file system. The user interface defines two panels representing the two colours used when creating the resulting bitonal bitmap. When clicking on either panel, the user is presented with a colour dialog, allowing them to change the colour of that specific panel. The bitonal extension manipulates pixel colour components directly, in other words updating a pixel's alpha, red, green and blue values directly. It is worth saying that this transformation is used by those who want to design nice covers for books.

I did not add the code of these transformations here because it is about 4000 lines of code in total, and each main class has several subclasses that perform many other functions. You can look at the code in the project attached on Sucourse.

5- Cropping:

This is basically done by assigning the pixels chosen by the user to another array of arrays, and then assigning them to another image.

There are also other transformations that I stated in the progress report.

The first transformation that comes to mind is converting the current image to a grayscale image, but the critical question is how this actually happens. A grayscale image is an image in which each pixel has a value between 0 and 255, meaning that it is an 8-bit image. A nice algorithm is to sum the RGB values of each pixel, divide the sum by three and assign the result back to the targeted pixel. This code in C# does it for us:

for (int y = 0; y < bt.Height; y++)
{
    for (int x = 0; x < bt.Width; x++)
    {
        Color c = bt.GetPixel(x, y);
        int r = c.R;
        int g = c.G;
        int b = c.B;
        int avg = (r + g + b) / 3;
        bt.SetPixel(x, y, Color.FromArgb(avg, avg, avg));
    }
}

where bt is the targeted image.

Another important transformation is rotation. The rotation here is done by sending the image to Matlab, where a function is implemented that is able to rotate any given photo by a given angle; let's try rotating it by 90 degrees and by 30 degrees, for example. This operation is done by linking Matlab with C#, where the function implemented in Matlab can rotate any given image by any given angle: function c = Rotate(Image, angle);

Another feature is creating an RGB channel view of the image and showing its red scale, green scale and blue scale. What is basically done here is an image enhancement. This is basically done by creating some zero matrices and concatenating them with other matrices, depending on the channel that we want to keep or the corresponding grayscale. Here is the C# implementation of it:

float changered = redbar.Value * 1f;
float changegreen = greenbar.Value * 1f;
float changeblue = bluebar.Value * 1f;

ImageAttributes ia = new ImageAttributes();
ColorMatrix cmPicture = new ColorMatrix(new float[][]
{
    new float[] {1 + changered, 0, 0, 0, 0},
    new float[] {0, 1 + changegreen, 0, 0, 0},
    new float[] {0, 0, 1 + changeblue, 0, 0},
    new float[] {0, 0, 0, 1, 0},
    new float[] {0, 0, 0, 0, 1}
});
ia.SetColorMatrix(cmPicture);

Here the f suffix just converts the value to a floating-point one. As we can see in the colour matrix, we are combining the colour channel matrices, where changered, changegreen and changeblue depend on the values of the corresponding trackbars.

I could also implement some filters on the given images, and one of the most famous filters to implement on any image is the Gaussian filter. The basic idea behind the Gaussian filter is that it blurs an image by adding weighted grayscales of the surrounding pixels onto the pixel to be blurred. The centre pixel has the biggest weight, and the surrounding pixels have less and less weight as their distance to the centre increases. The table of weights is calculated using the Gaussian function:

G(x, y) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))

Say we want to apply a three-by-three filter operation on our image; it is worth noting that applying G(x) and G(y) separately saves a lot of calculation, because a 1D Gaussian kernel is a 1D array, not an n×n matrix. The process includes the following helpers: Calculate1DSampleKernel, CalculateNormalized1DSampleKernel, NormalizeMatrix and GaussianConvolution, which process the points of the image so that it becomes blurred.
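To illustrate this separable approach, here is a hedged sketch that builds a normalized 1D Gaussian kernel and applies it first horizontally and then vertically. The names BuildKernel1D and GaussianBlur, the 3σ kernel radius and the border clamping are my own illustrative assumptions, not the project's Calculate1DSampleKernel or GaussianConvolution code:

using System;
using System.Drawing;

static class GaussianSketch
{
    // Builds a normalized 1D Gaussian kernel for the given sigma;
    // a radius of 3*sigma covers practically all of the weight.
    static double[] BuildKernel1D(double sigma)
    {
        int radius = (int)Math.Ceiling(3 * sigma);
        double[] kernel = new double[2 * radius + 1];
        double sum = 0;

        for (int i = -radius; i <= radius; i++)
        {
            kernel[i + radius] = Math.Exp(-(i * i) / (2 * sigma * sigma));
            sum += kernel[i + radius];
        }
        for (int i = 0; i < kernel.Length; i++)
            kernel[i] /= sum;                 // normalize so the weights add up to 1

        return kernel;
    }

    // Applies the same 1D kernel horizontally and then vertically,
    // which is much cheaper than a full 2D convolution.
    public static Bitmap GaussianBlur(Bitmap source, double sigma)
    {
        double[] kernel = BuildKernel1D(sigma);
        Bitmap horizontal = Convolve(source, kernel, true);
        return Convolve(horizontal, kernel, false);
    }

    static Bitmap Convolve(Bitmap src, double[] kernel, bool horizontalPass)
    {
        int radius = kernel.Length / 2;
        Bitmap dst = new Bitmap(src.Width, src.Height);

        for (int y = 0; y < src.Height; y++)
        {
            for (int x = 0; x < src.Width; x++)
            {
                double r = 0, g = 0, b = 0;

                for (int k = -radius; k <= radius; k++)
                {
                    int nx = horizontalPass ? Math.Min(Math.Max(x + k, 0), src.Width - 1) : x;
                    int ny = horizontalPass ? y : Math.Min(Math.Max(y + k, 0), src.Height - 1);
                    Color c = src.GetPixel(nx, ny);

                    double w = kernel[k + radius];
                    r += w * c.R;
                    g += w * c.G;
                    b += w * c.B;
                }

                dst.SetPixel(x, y, Color.FromArgb(
                    Math.Min(255, (int)Math.Round(r)),
                    Math.Min(255, (int)Math.Round(g)),
                    Math.Min(255, (int)Math.Round(b))));
            }
        }

        return dst;
    }
}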
Another transformation is getting the negative of any image. This is basically done by subtracting each RGB component of each pixel from 255 and assigning the results back to the image.

One more transformation is converting any colourful image into sepia. This process has some steps that need to be followed. First we get the RGB value of each pixel; then we calculate the terms tr, tg and tb, which are weighted sums of the pixel's RGB values multiplied by some small coefficients that are less than one, and we round them. The new RGB value of the pixel is then set as per the following conditions:

If tr > 255 then r = 255, else r = tr
If tg > 255 then g = 255, else g = tg
If tb > 255 then b = 255, else b = tb

Example:
A = 255
R = 100
G = 150
B = 200

where A, R, G and B represent the alpha, red, green and blue values of the pixel. Remember that ARGB values are integers in the range 0 to 255. So, to convert the colour pixel into a sepia pixel, we first calculate tr, tg and tb and then assign these values back to the pixel. For the red term we have the following:

tr = 0.393(100) + 0.769(150) + 0.189(200)
tr = 192.45
tr = 192 (after rounding)

As a final point, it is worth mentioning that I tried to achieve something quite innovative, which is linking Matlab with C#. As a matter of fact I managed to do that, but the problem was optimizing it, and here is how I did it: after writing a Matlab function, I deployed it using the deploy tool from the command window, converting the function into DLL form (as a C# library) so that I could call it inside C#. The problem was converting the image from the C# form (a bitmap) to the Matlab form (an array of arrays of bytes) and back again, which took exactly 21 seconds. This is of course regardless of the time required to do the operation in Matlab itself, which is why I could not depend on it much.

In sum, this software could be developed further if I had more programmers working with me, and in the future I hope to keep working on it, adding more filters and more features.

References:

Domoreinlesstime (2013). Accessing MATLAB Functions from C#.NET. Available at: https://domoreinlesstime.wordpress.com/2013/01/26/access-matlab-from-c/ [Accessed 2 Jan. 2018].
The Supercomputing Blog (2017). Oil Painting Algorithm. Available at: http://supercomputingblog.com/graphics/oil-painting-algorithm/ [Accessed 29 Dec. 2017].
Mathworks.com (2017). Measure properties of image regions – MATLAB regionprops – MathWorks United Kingdom. Available at: https://www.mathworks.com/help/images/ref/regionprops.html#buorh68-1 [Accessed 29 Dec. 2017].
