Compare revisions

Commits on Source (175)
Showing with 1798 additions and 105 deletions
@@ -4,7 +4,7 @@

As a final exam, this project aims to evaluate 3A SICOM students.

Students will work individually, and each student must submit their work (report and code) as a merge request on Gitlab by **Wednesday, January 31st**.

## 2. Presentation
@@ -12,111 +12,85 @@ Many commercial RGB cameras use a technology called Color Filters Array (CFA). T…

![alt text](readme_imgs/patterns.png)

Because of the filters, each pixel on the sensor acquires only one color (red, green or blue). This leads to a raw acquisition that is a gray-scale image. Example:

![alt text](readme_imgs/example.png)

The goal of this project is to perform *demosaicking*: recover all the missing colors for each pixel. This way we recover the full RGB image.

We propose that you reconstruct 4 images (you can find them in the `images` folder of this project). These images are from the open dataset of the **National Gallery of Art**, USA, which can be found [here](https://github.com/NationalGalleryOfArt/opendata). To reconstruct these images we provide the forward operator, modeling the effect of a CFA camera with 2 different CFA patterns: either the Bayer or the Quad Bayer pattern. This operation is described in `src/forward_model.py`.
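As a quick orientation, here is a minimal sketch of how this operator might be driven. The `CFA` class and its `adjoint` method appear in the student code later in this diff; the `direct` call is an assumption about the operator's interface.

~~~python
import numpy as np

from src.forward_model import CFA

x = np.random.rand(1024, 1024, 3)  # stand-in for a ground-truth RGB image in [0, 1]
op = CFA('bayer', x.shape)         # 'bayer' or 'quad_bayer'
y = op.direct(x)                   # simulated raw acquisition (assumed method name)
x0 = op.adjoint(y)                 # crude back-projection: one known color per pixel
~~~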
## 3. How to proceed?

All the images to reconstruct have the same size (1024x1024) and are RGB. You have to use methods seen during this semester to recover a fully colored RGB image using **only** the raw acquisition (gray-scale image) and the forward operator (though you are not forced to use it). Of course, directly using the ground truth (original image) is forbidden; it can only be used to compute metrics (SSIM, PSNR), as sketched below.
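A minimal sketch of how such metrics could be computed, assuming scikit-image is available (the package list is not shown in this diff); `x` stands for the ground truth and `x_hat` for a reconstruction, both RGB in [0, 1]:

~~~python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

x = np.random.rand(64, 64, 3)                                # stand-in ground truth
x_hat = np.clip(x + 0.05 * np.random.randn(*x.shape), 0, 1)  # stand-in reconstruction

psnr = peak_signal_noise_ratio(x, x_hat, data_range=1)
ssim = structural_similarity(x, x_hat, channel_axis=2, data_range=1)
print(f'PSNR: {psnr:.2f} dB -- SSIM: {ssim:.3f}')
~~~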
## 4. Some hints to start

We provide in `src/methods/baseline` a basic interpolation method which can help you start (a sketch of such a baseline closes this section). These methods are described in the PhD thesis _Model Based Signal Processing Techniques for Nonconventional Optical Imaging Systems_ and the user's manual _Pyxalis Image Viewer_ (see references). You can also find in the academic literature techniques (interpolation, inverse problems, machine learning, etc.) that solve problems close to this one.

*The computation time might be very long depending on your machine. To verify that your method works, try to work with smaller test images.*
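For reference, a bilinear interpolation baseline fits in a few lines. This is only a sketch, not the provided baseline; it assumes the CFA mask is available as an (H, W, 3) array of 0/1 entries (the student code later in this diff accesses such a mask via `op.mask`).

~~~python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaick(y: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """y: raw (H, W) acquisition; mask: (H, W, 3) CFA mask of 0/1 entries."""
    # 3x3 bilinear kernel; wide enough for the Bayer pattern
    # (quad_bayer would need a larger support).
    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    res = np.empty(mask.shape)
    for c in range(3):
        acquired = convolve(y * mask[:, :, c], kernel, mode='mirror')
        weights = convolve(mask[:, :, c].astype(float), kernel, mode='mirror')
        res[:, :, c] = acquired / weights  # normalized convolution
    return np.clip(res, 0, 1)
~~~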
## 5. Project repository

The project has the following organization:

~~~text
sicom_image_analysis_project/
├─.gitignore          # Git's ignore file
├─images/             # All images you need to reconstruct
├─main.ipynb          # A notebook to experiment with the code
├─output/             # The output folder where you can save your reconstructions
├─README.md           # Readme, contains all information about the project
├─readme_imgs/        # Images for the Readme
├─requirements.txt    # Requirement file for packages installation
└─src/                # All of the source files for the project
  ├─checks.py         # File containing some sanity checks
  ├─forward_model.py  # File containing the CFA operator
  ├─utils.py          # Some utilities
  └─methods/
    ├─baseline/       # Example of reconstruction
    └─template/       # Template of your project (to be copied)
~~~
## 6. Instructions:

Each student will fork this Git repository on their own account. You will then work in your own version of this repository.

Along with your code you must submit a report **as a pdf file** with the name **name.pdf** inside the folder `src/methods/your_name`.

The code and the report must be **written in English**.

### 6.1. The code:

- You should first **copy** the folder `src/methods/template` and rename it with your name. It is in this folder that you will work; **nothing else should be modified**, apart from `main.ipynb` for experimenting with the code.
- You can add as many files as you want in this folder, but the only interface to run your code is `run_reconstruction` in `src/methods/your_name`. You can modify this function as you please, as long as it takes as arguments the image to reconstruct (`y`) and the name of the CFA (`cfa`), and returns a demosaicked image (see the sketch after this list).
- Have a look in `src/methods/baseline`; your project should work in the same way.
- You can use all the functions defined in the project, even the ones that you should not modify (`utils.py`, `forward_model.py`, etc.).
- Your code must be operational through `run_reconstruction`, as we will test it on new and private images.
- The notebook provides a workbench. It should **not be included in the merge request**; it is just a working document for you.
- Comment your code when needed.
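As referenced in the list above, here is a sketch of how a submission might be exercised; the module path and the `direct` call are assumptions.

~~~python
import numpy as np

from src.forward_model import CFA
from src.methods.your_name.reconstruct import run_reconstruction  # hypothetical path

x = np.random.rand(256, 256, 3)  # stand-in test image
op = CFA('bayer', x.shape)
y = op.direct(x)                 # assumed forward call, as in the earlier sketch
x_hat = run_reconstruction(y, 'bayer')
assert x_hat.shape == x.shape
~~~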
### 6.2. The report:

Your report **must be a pdf file** written in English and should not be longer than 5 pages. In this file you are asked to explain:

- The problem statement, showing us that you've understood the project.
- The solution that you've chosen, explaining the theory.
- With tools that **you've built**, show us that your solution is relevant and working.
- Results: give us your results for the images `img_1`, `img_2`, `img_3`, `img_4`.
- Conclusion: take a step back from your results. What can be improved?

### 6.3. Submission:

To submit your work you just need to create a merge request in Gitlab (see [here](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html#when-you-work-in-a-fork) for a detailed explanation). This merge request must **only encompass the changes made to `src/methods/your_name`, without `main.ipynb`**. Because each merge request will only create `src/methods/your_name`, there will not be any conflicts between students' projects.
## 7. Supervisors

- Mauro Dalla Mura: mauro.dalla-mura@gipsa-lab.grenoble-inp.fr
- Matthieu Muller: matthieu.muller@gipsa-lab.grenoble-inp.fr
- Daniele Picone: daniele.picone@grenoble-inp.fr

## 8. References

- NGA: [website](https://www.nga.gov/)
- Model Based Signal Processing Techniques for Nonconventional Optical Imaging Systems: [thesis](https://theses.hal.science/tel-03596486)
- Pyxalis Image Viewer user's guide: [pdf](https://pyxalis.com/wp-content/uploads/2021/12/PYX-ImageViewer-User_Guide.pdf)
@@ -71,25 +71,14 @@ def check_data_range(img: np.ndarray) -> None:
        raise Exception(f'Pixel\'s values must be in range [0, 1]. Got range [{np.min(img)}, {np.max(img)}].')


def check_cfa(cfa: str) -> None:
    """Checks if the CFA's name is correct.

    Args:
        cfa (str): CFA name.
    """
    if cfa not in ['bayer', 'quad_bayer']:
        raise Exception(f'Unknown CFA name. Got {cfa} but expected either bayer or quad_bayer.')

...
File added
from scipy import ndimage
import numpy as np
############################################################
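# (Added note) Overview of this file: each missing color sample is estimated by
# extrapolating from up to four directions (top/bottom/left/right for green,
# diagonals for red/blue at non-green sites); a per-pixel RMSE test drops the
# least reliable direction, and the median of the remaining estimates is kept.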
def color_pixel(i,j,cfa = "bayer"):
if (cfa == "quad_bayer"):
i = i//2
j = j//2
if ((i+j)%2==0):
return 'green'
else:
if (i%2==0):
return 'red'
else:
return 'blue'
def rmse_pixel(pixel_raw,pixel_extrapolate):
return np.sqrt(np.mean((pixel_raw-pixel_extrapolate)**2))
######### Method extrapolation with edge detection #########
def compute_orientation_matrix(img_raw):
vertical = ndimage.sobel(img_raw, 0)
horizontal = ndimage.sobel(img_raw, 1)
orientation_matrix = np.zeros(img_raw.shape)
orientation_matrix[vertical < horizontal] = 1
return orientation_matrix
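# (Added note) orientation_matrix is 1 where the axis-0 Sobel response is
# smaller than the axis-1 response; the green extrapolation below uses it to
# decide whether to discard a horizontal or a vertical neighbor.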
## Green Channel ##
## Formulas for extrapolation of pixels:
def extrapolate_green_top(img_raw,i,j):
return img_raw[i-1,j] + 3/4*(img_raw[i,j]-img_raw[i-2,j])-1/4*(img_raw[i-1,j]-img_raw[i-3,j])
def extrapolate_green_bottom(img_raw,i,j):
return img_raw[i+1,j] + 3/4*(img_raw[i,j]-img_raw[i+2,j])-1/4*(img_raw[i+1,j]-img_raw[i+3,j])
def extrapolate_green_left(img_raw,i,j):
return img_raw[i,j-1] + 3/4*(img_raw[i,j]-img_raw[i,j-2])-1/4*(img_raw[i,j-1]-img_raw[i,j-3])
def extrapolate_green_right(img_raw,i,j):
return img_raw[i,j+1] + 3/4*(img_raw[i,j]-img_raw[i,j+2])-1/4*(img_raw[i,j+1]-img_raw[i,j+3])
## Extrapolation method:
def median_extrapolate_green_pixel(img_raw,i,j,orientations_to_drop):
list_extrapolate_pixel = []
if ("top" not in orientations_to_drop):
list_extrapolate_pixel.append(extrapolate_green_top(img_raw,i,j))
if ("bottom" not in orientations_to_drop):
list_extrapolate_pixel.append(extrapolate_green_bottom(img_raw,i,j))
if("left" not in orientations_to_drop):
list_extrapolate_pixel.append(extrapolate_green_left(img_raw,i,j))
if("right" not in orientations_to_drop):
list_extrapolate_pixel.append(extrapolate_green_right(img_raw,i,j))
return np.median(list_extrapolate_pixel)
def extrapolate_green_pixel(img_raw,i,j,orientation):
# First the borders:
orientations_to_drop = []
if (i<2):
orientations_to_drop.append('top')
if (i>img_raw.shape[0]-4):
orientations_to_drop.append('bottom')
if (j<2):
orientations_to_drop.append('left')
if (j>img_raw.shape[1]-4):
orientations_to_drop.append('right')
# Then the rest of the image:
else:
        if (orientation == 1): # V < H, so we eliminate one horizontal pixel.
if ("right" not in orientations_to_drop and "left" not in orientations_to_drop):
rmse_pixel_left = rmse_pixel(img_raw[i,j],extrapolate_green_left(img_raw,i,j))
rmse_pixel_right = rmse_pixel(img_raw[i,j],extrapolate_green_right(img_raw,i,j))
if (rmse_pixel_left > rmse_pixel_right):
orientations_to_drop.append('left')
else:
orientations_to_drop.append('right')
        else: # V > H, so we eliminate one vertical pixel.
if ("top" not in orientations_to_drop and "bottom" not in orientations_to_drop):
rmse_pixel_top = rmse_pixel(img_raw[i,j],extrapolate_green_top(img_raw,i,j))
rmse_pixel_bottom = rmse_pixel(img_raw[i,j],extrapolate_green_bottom(img_raw,i,j))
if (rmse_pixel_top > rmse_pixel_bottom):
orientations_to_drop.append('top')
else:
orientations_to_drop.append('bottom')
return median_extrapolate_green_pixel(img_raw,i,j,orientations_to_drop)
def extrapolate_green(img_raw,extrapolate_img):
orientation_matrix = compute_orientation_matrix(img_raw)
for i in range(img_raw.shape[0]):
for j in range(img_raw.shape[1]):
if (color_pixel(i,j)!= "green"):
extrapolate_img[i,j,1] = extrapolate_green_pixel(img_raw,i,j,orientation_matrix[i,j])
else:
extrapolate_img[i,j,1] = img_raw[i,j]
return extrapolate_img
## Red and Blue Channels ##
def extrapolate_top(img_raw,img_extrapolate,i,j):
return (img_raw[i-1,j] + img_raw[i,j]-img_extrapolate[i-1,j,1])
def extrapolate_left(img_raw,img_extrapolate,i,j):
return (img_raw[i,j-1] + img_raw[i,j]-img_extrapolate[i,j-1,1])
def extrapolate_right(img_raw,img_extrapolate,i,j):
return (img_raw[i,j+1] + img_raw[i,j]-img_extrapolate[i,j+1,1])
def extrapolate_bottom(img_raw,img_extrapolate,i,j):
return (img_raw[i+1,j] + img_raw[i,j]-img_extrapolate[i+1,j,1])
def extrapolate_top_left(img_raw,img_extrapolate,i,j):
return (img_raw[i-1,j-1] + img_extrapolate[i,j,1]-img_extrapolate[i-1,j-1,1])
def extrapolate_top_right(img_raw,img_extrapolate,i,j):
return (img_raw[i-1,j+1] + img_extrapolate[i,j,1]-img_extrapolate[i-1,j+1,1])
def extrapolate_bottom_left(img_raw,img_extrapolate,i,j):
return (img_raw[i+1,j-1] + img_extrapolate[i,j,1]-img_extrapolate[i+1,j-1,1])
def extrapolate_bottom_right(img_raw,img_extrapolate,i,j):
return (img_raw[i+1,j+1] + img_extrapolate[i,j,1]-img_extrapolate[i+1,j+1,1])
def median_pixel(img_raw,img_extrapolate,i,j,orientations_to_drop):
list_extrapolate = []
if (color_pixel(i,j) != "green"):
if("top_left" not in orientations_to_drop):
list_extrapolate.append(extrapolate_top_left(img_raw,img_extrapolate,i,j))
if("top_right" not in orientations_to_drop):
list_extrapolate.append(extrapolate_top_right(img_raw,img_extrapolate,i,j))
if("bottom_left" not in orientations_to_drop):
list_extrapolate.append(extrapolate_bottom_left(img_raw,img_extrapolate,i,j))
if("bottom_right" not in orientations_to_drop):
list_extrapolate.append(extrapolate_bottom_right(img_raw,img_extrapolate,i,j))
elif (color_pixel(i,j) == "green"):
if("top" not in orientations_to_drop):
list_extrapolate.append(extrapolate_top(img_raw,img_extrapolate,i,j))
if("left" not in orientations_to_drop):
list_extrapolate.append(extrapolate_left(img_raw,img_extrapolate,i,j))
if("right" not in orientations_to_drop):
list_extrapolate.append(extrapolate_right(img_raw,img_extrapolate,i,j))
if("bottom" not in orientations_to_drop):
list_extrapolate.append(extrapolate_bottom(img_raw,img_extrapolate,i,j))
return np.median(list_extrapolate)
def extrapolate_pixel(img_raw,img_extrapolate,i,j,color):
orientations_to_drop = []
if (color_pixel(i,j)!='green'):
if (i<1):
orientations_to_drop.append("top_left")
orientations_to_drop.append("top_right")
if (i>img_raw.shape[0]-2):
orientations_to_drop.append("bottom_left")
orientations_to_drop.append("bottom_right")
if (j<1):
orientations_to_drop.append("top_left")
orientations_to_drop.append("bottom_left")
if (j>img_raw.shape[1]-2):
orientations_to_drop.append("top_right")
orientations_to_drop.append("bottom_right")
if ("top_left" not in orientations_to_drop and "top_right" not in orientations_to_drop and "bottom_left" not in orientations_to_drop and "bottom_right" not in orientations_to_drop):
rmse_top_left = rmse_pixel(img_raw[i,j],extrapolate_top_left(img_raw,img_extrapolate,i,j))
rmse_top_right = rmse_pixel(img_raw[i,j],extrapolate_top_right(img_raw,img_extrapolate,i,j))
rmse_bottom_left = rmse_pixel(img_raw[i,j],extrapolate_bottom_left(img_raw,img_extrapolate,i,j))
rmse_bottom_right = rmse_pixel(img_raw[i,j],extrapolate_bottom_right(img_raw,img_extrapolate,i,j))
if (rmse_bottom_left> rmse_bottom_right and rmse_bottom_left> rmse_top_left and rmse_bottom_left> rmse_top_right):
orientations_to_drop.append("bottom_left")
elif (rmse_bottom_right> rmse_bottom_left and rmse_bottom_right> rmse_top_left and rmse_bottom_right> rmse_top_right):
orientations_to_drop.append("bottom_right")
elif (rmse_top_left> rmse_bottom_left and rmse_top_left> rmse_bottom_right and rmse_top_left> rmse_top_right):
orientations_to_drop.append("top_left")
else:
orientations_to_drop.append("top_right")
elif(color_pixel(i,j)=="green"):
if (i<1):
orientations_to_drop.append("top")
if (i>img_raw.shape[0]-2):
orientations_to_drop.append("bottom")
if (j<1):
orientations_to_drop.append("left")
if (j>img_raw.shape[1]-2):
orientations_to_drop.append("right")
if ((i%2!=0 and color == "red") or (i%2==0 and color == "blue")):
if ("right" not in orientations_to_drop and "left" not in orientations_to_drop):
rmse_pixel_left = rmse_pixel(img_raw[i,j],extrapolate_left(img_raw,img_extrapolate,i,j))
rmse_pixel_right = rmse_pixel(img_raw[i,j],extrapolate_right(img_raw,img_extrapolate,i,j))
if (rmse_pixel_left > rmse_pixel_right):
orientations_to_drop.append('left')
else:
orientations_to_drop.append('right')
else:
if ("top" not in orientations_to_drop and "bottom" not in orientations_to_drop):
rmse_pixel_top = rmse_pixel(img_raw[i,j],extrapolate_top(img_raw,img_extrapolate,i,j))
rmse_pixel_bottom = rmse_pixel(img_raw[i,j],extrapolate_bottom(img_raw,img_extrapolate,i,j))
if (rmse_pixel_top > rmse_pixel_bottom):
orientations_to_drop.append('top')
else:
orientations_to_drop.append('bottom')
return median_pixel(img_raw,img_extrapolate,i,j,orientations_to_drop)
def extrapolate_red(img_raw,img_extrapolate):
for i in range(img_raw.shape[0]):
for j in range(img_raw.shape[1]):
if (color_pixel(i,j)!="red"):
img_extrapolate[i,j,0] = extrapolate_pixel(img_raw,img_extrapolate,i,j,"red")
else:
img_extrapolate[i,j,0] = img_raw[i,j]
def extrapolate_blue(img_raw,img_extrapolate):
for i in range(img_raw.shape[0]):
for j in range(img_raw.shape[1]):
if (color_pixel(i,j)!="blue"):
img_extrapolate[i,j,2] = extrapolate_pixel(img_raw,img_extrapolate,i,j,"blue")
else:
img_extrapolate[i,j,2] = img_raw[i,j]
def extrapolate_img(img_cfa):
    extrapolated_img = np.zeros(img_cfa.shape + (3,))
    extrapolate_green(img_cfa,extrapolated_img)
    extrapolate_red(img_cfa,extrapolated_img)
    extrapolate_blue(img_cfa,extrapolated_img)
    return extrapolated_img
#################################################
##QUAD BAYER
#################################################
## Green Channel ##
### Formulas for extrapolation of pixels:
def extrapolate_green_top_quad(img_raw,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] = img_raw[i+m-1*2,j+n] + 3/4*(img_raw[i+m,j+n]-img_raw[i+m-2*2,j+n])-1/4*(img_raw[i+m-1*2,j+n]-img_raw[i+m-3*2,j+n])
return extrapolate_quad
def extrapolate_green_bottom_quad(img_raw,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] = img_raw[i+m+1*2,j+n] + 3/4*(img_raw[i+m,j+n]-img_raw[i+m+2*2,j+n])-1/4*(img_raw[i+m+1*2,j+n]-img_raw[i+m+3*2,j+n])
return extrapolate_quad
def extrapolate_green_left_quad(img_raw,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] = img_raw[i+m,j+n-1*2] + 3/4*(img_raw[i+m,j+n]-img_raw[i+m,j+n-2*2])-1/4*(img_raw[i+m,j+n-1*2]-img_raw[i+m,j+n-3*2])
return extrapolate_quad
def extrapolate_green_right_quad(img_raw,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] = img_raw[i+m,j+n+1*2] + 3/4*(img_raw[i+m,j+n]-img_raw[i+m,j+n+2*2])-1/4*(img_raw[i+m,j+n+1*2]-img_raw[i+m,j+n+3*2])
return extrapolate_quad
### Extrapolation method:
def median_extrapolate_green_pixel_quad(img_raw,i,j,orientations_to_drop):
list_extrapolate_pixel = []
if ("top" not in orientations_to_drop):
list_extrapolate_pixel.append(extrapolate_green_top_quad(img_raw,i,j))
if ("bottom" not in orientations_to_drop):
list_extrapolate_pixel.append(extrapolate_green_bottom_quad(img_raw,i,j))
if("left" not in orientations_to_drop):
list_extrapolate_pixel.append(extrapolate_green_left_quad(img_raw,i,j))
if("right" not in orientations_to_drop):
list_extrapolate_pixel.append(extrapolate_green_right_quad(img_raw,i,j))
    median_quad = np.zeros((2,2))
    for m in range(2):
        for n in range(2):
            median_quad[m,n] = np.median([list_extrapolate_pixel[k][m,n] for k in range(len(list_extrapolate_pixel))])
    return median_quad
def extrapolate_green_pixel_quad(img_raw,i,j,orientation):
# First the borders:
orientations_to_drop = []
if (i<2):
orientations_to_drop.append('top')
if (i>img_raw.shape[0]-4*2):
orientations_to_drop.append('bottom')
if (j<2):
orientations_to_drop.append('left')
if (j>img_raw.shape[1]-4*2):
orientations_to_drop.append('right')
# Then the rest of the image:
else:
        if (orientation > 0.5): # V < H, so we eliminate one horizontal pixel.
if ("right" not in orientations_to_drop and "left" not in orientations_to_drop):
rmse_pixel_left = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_green_left_quad(img_raw,i,j))
rmse_pixel_right = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_green_right_quad(img_raw,i,j))
if (np.sum(rmse_pixel_left) > np.sum(rmse_pixel_right)):
orientations_to_drop.append('left')
else:
orientations_to_drop.append('right')
        else: # V > H, so we eliminate one vertical pixel.
            if ("top" not in orientations_to_drop and "bottom" not in orientations_to_drop):
                rmse_pixel_top = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_green_top_quad(img_raw,i,j))
                rmse_pixel_bottom = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_green_bottom_quad(img_raw,i,j))
if (np.sum(rmse_pixel_top) > np.sum(rmse_pixel_bottom)):
orientations_to_drop.append('top')
else:
orientations_to_drop.append('bottom')
return median_extrapolate_green_pixel_quad(img_raw,i,j,orientations_to_drop)
def extrapolate_green_quad(img_raw,extrapolate_img):
orientation_matrix = compute_orientation_matrix(img_raw)
for i in range(0,img_raw.shape[0],2):
for j in range(0,img_raw.shape[1],2):
if (color_pixel(i,j,'quad_bayer')!= "green"):
extrapolate_img[i:i+2,j:j+2,1] = extrapolate_green_pixel_quad(img_raw,i,j,(1/4) *np.sum(orientation_matrix[i:i+2,j:j+2]))
else:
extrapolate_img[i:i+2,j:j+2,1] = img_raw[i:i+2,j:j+2]
return extrapolate_img
## Red and Blue Channels ##
def extrapolate_top_quad(img_raw,img_extrapolate,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] = (img_raw[i+m-1*2,j+n] + img_raw[i+m,j+n]-img_extrapolate[i+m-1*2,j+n,1])
return extrapolate_quad
def extrapolate_left_quad(img_raw,img_extrapolate,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] = (img_raw[i+m,j+n-1*2] + img_raw[i+m,j+n]-img_extrapolate[i+m,j+n-1*2,1])
return extrapolate_quad
def extrapolate_right_quad(img_raw,img_extrapolate,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] = (img_raw[i+m,j+n+1*2] + img_raw[i+m,j+n]-img_extrapolate[i+m,j+n+1*2,1])
return extrapolate_quad
def extrapolate_bottom_quad(img_raw,img_extrapolate,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] = (img_raw[i+m+1*2,j+n] + img_raw[i+m,j+n]-img_extrapolate[i+m+1*2,j+n,1])
return extrapolate_quad
def extrapolate_top_left_quad(img_raw,img_extrapolate,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] =(img_raw[i+m-1*2,j+n-1*2] + img_extrapolate[i+m,j+n,1]-img_extrapolate[i+m-1*2,j+n-1*2,1])
return extrapolate_quad
def extrapolate_top_right_quad(img_raw,img_extrapolate,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] =(img_raw[i+m-1*2,j+n+1*2] + img_extrapolate[i+m,j+n,1]-img_extrapolate[i+m-1*2,j+n+1*2,1])
return extrapolate_quad
def extrapolate_bottom_left_quad(img_raw,img_extrapolate,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] =(img_raw[i+m+1*2,j+n-1*2] + img_extrapolate[i+m,j+n,1]-img_extrapolate[i+m+1*2,j+n-1*2,1])
return extrapolate_quad
def extrapolate_bottom_right_quad(img_raw,img_extrapolate,i,j):
extrapolate_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
extrapolate_quad[m,n] =(img_raw[i+m+1*2,j+n+1*2] + img_extrapolate[i+m,j+n,1]-img_extrapolate[i+m+1*2,j+n+1*2,1])
return extrapolate_quad
def median_pixel_quad(img_raw,img_extrapolate,i,j,orientations_to_drop):
list_extrapolate = []
if (color_pixel(i,j,"quad_bayer") != "green"):
if("top_left" not in orientations_to_drop):
list_extrapolate.append(extrapolate_top_left_quad(img_raw,img_extrapolate,i,j))
if("top_right" not in orientations_to_drop):
list_extrapolate.append(extrapolate_top_right_quad(img_raw,img_extrapolate,i,j))
if("bottom_left" not in orientations_to_drop):
list_extrapolate.append(extrapolate_bottom_left_quad(img_raw,img_extrapolate,i,j))
if("bottom_right" not in orientations_to_drop):
list_extrapolate.append(extrapolate_bottom_right_quad(img_raw,img_extrapolate,i,j))
elif (color_pixel(i,j,"quad_bayer") == "green"):
if("top" not in orientations_to_drop):
list_extrapolate.append(extrapolate_top_quad(img_raw,img_extrapolate,i,j))
if("left" not in orientations_to_drop):
list_extrapolate.append(extrapolate_left_quad(img_raw,img_extrapolate,i,j))
if("right" not in orientations_to_drop):
list_extrapolate.append(extrapolate_right_quad(img_raw,img_extrapolate,i,j))
if("bottom" not in orientations_to_drop):
list_extrapolate.append(extrapolate_bottom_quad(img_raw,img_extrapolate,i,j))
median_quad = np.zeros((2,2))
for m in range(2):
for n in range(2):
median_quad[m,n] = np.median([list_extrapolate[k][m,n] for k in range(len(list_extrapolate))])
return median_quad
def extrapolate_pixel_quad(img_raw,img_extrapolate,i,j,color):
orientations_to_drop = []
if (color_pixel(i,j,"quad_bayer")!='green'):
if (i<1):
orientations_to_drop.append("top_left")
orientations_to_drop.append("top_right")
if (i>img_raw.shape[0]-2*2):
orientations_to_drop.append("bottom_left")
orientations_to_drop.append("bottom_right")
if (j<1):
orientations_to_drop.append("top_left")
orientations_to_drop.append("bottom_left")
if (j>img_raw.shape[1]-2*2):
orientations_to_drop.append("top_right")
orientations_to_drop.append("bottom_right")
if ("top_left" not in orientations_to_drop and "top_right" not in orientations_to_drop and "bottom_left" not in orientations_to_drop and "bottom_right" not in orientations_to_drop):
rmse_top_left = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_top_left_quad(img_raw,img_extrapolate,i,j))
rmse_top_right = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_top_right_quad(img_raw,img_extrapolate,i,j))
rmse_bottom_left = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_bottom_left_quad(img_raw,img_extrapolate,i,j))
rmse_bottom_right = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_bottom_right_quad(img_raw,img_extrapolate,i,j))
if (rmse_bottom_left> rmse_bottom_right and rmse_bottom_left> rmse_top_left and rmse_bottom_left> rmse_top_right):
orientations_to_drop.append("bottom_left")
elif (rmse_bottom_right> rmse_bottom_left and rmse_bottom_right> rmse_top_left and rmse_bottom_right> rmse_top_right):
orientations_to_drop.append("bottom_right")
elif (rmse_top_left> rmse_bottom_left and rmse_top_left> rmse_bottom_right and rmse_top_left> rmse_top_right):
orientations_to_drop.append("top_left")
else:
orientations_to_drop.append("top_right")
elif(color_pixel(i,j,"quad_bayer")=="green"):
if (i<1):
orientations_to_drop.append("top")
if (i>img_raw.shape[0]-2*2):
orientations_to_drop.append("bottom")
if (j<1):
orientations_to_drop.append("left")
if (j>img_raw.shape[1]-2*2):
orientations_to_drop.append("right")
if (((i/2)%2!=0 and color == "red") or ((i/2)%2==0 and color == "blue")):
if ("right" not in orientations_to_drop and "left" not in orientations_to_drop):
rmse_pixel_left = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_left_quad(img_raw,img_extrapolate,i,j))
rmse_pixel_right = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_right_quad(img_raw,img_extrapolate,i,j))
if (rmse_pixel_left > rmse_pixel_right):
orientations_to_drop.append('left')
else:
orientations_to_drop.append('right')
else:
if ("top" not in orientations_to_drop and "bottom" not in orientations_to_drop):
rmse_pixel_top = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_top_quad(img_raw,img_extrapolate,i,j))
rmse_pixel_bottom = rmse_pixel(img_raw[i:i+2,j:j+2],extrapolate_bottom_quad(img_raw,img_extrapolate,i,j))
if (rmse_pixel_top > rmse_pixel_bottom):
orientations_to_drop.append('top')
else:
orientations_to_drop.append('bottom')
return median_pixel_quad(img_raw,img_extrapolate,i,j,orientations_to_drop)
def extrapolate_red_quad(img_raw,img_extrapolate):
for i in range(0,img_raw.shape[0],2):
for j in range(0,img_raw.shape[1],2):
if (color_pixel(i,j,"quad_bayer")!="red"):
img_extrapolate[i:i+2,j:j+2,0] = extrapolate_pixel_quad(img_raw,img_extrapolate,i,j,"red")
else:
img_extrapolate[i:i+2,j:j+2,0] = img_raw[i:i+2,j:j+2]
def extrapolate_blue_quad(img_raw,img_extrapolate):
for i in range(0,img_raw.shape[0],2):
for j in range(0,img_raw.shape[1],2):
if (color_pixel(i,j,"quad_bayer")!="blue"):
img_extrapolate[i:i+2,j:j+2,2] = extrapolate_pixel_quad(img_raw,img_extrapolate,i,j,"blue")
else:
img_extrapolate[i:i+2,j:j+2,2] = img_raw[i:i+2,j:j+2]
def extrapolate_img_quad(img_cfa):
    extrapolated_img = np.zeros(img_cfa.shape + (3,))
    extrapolate_green_quad(img_cfa,extrapolated_img)
    extrapolate_red_quad(img_cfa,extrapolated_img)
    extrapolate_blue_quad(img_cfa,extrapolated_img)
    return extrapolated_img
def extrapolate_cfa(img_cfa,cfa):
    if (cfa == "bayer"):
        return extrapolate_img(img_cfa)
    elif (cfa == "quad_bayer"):
        return extrapolate_img_quad(img_cfa)
    else:
        raise ValueError(f'Unknown CFA name. Got {cfa} but expected either bayer or quad_bayer.')
source diff could not be displayed: it is too large. Options to address this: view the blob.
"""The main file for the reconstruction.
This file should NOT be modified except the body of the 'run_reconstruction' function.
Students can call their functions (declared in other files of src/methods/your_name).
"""
import numpy as np

import functions as fu
from src.forward_model import CFA
def run_reconstruction(y: np.ndarray, cfa: str) -> np.ndarray:
"""Performs demosaicking on y.
Args:
y (np.ndarray): Mosaicked image to be reconstructed.
cfa (str): Name of the CFA. Can be bayer or quad_bayer.
Returns:
np.ndarray: Demosaicked image.
"""
    # Perform the reconstruction.
    return fu.extrapolate_cfa(y, cfa)
File added
"""The main file for the reconstruction.
This file should NOT be modified except the body of the 'run_reconstruction' function.
Students can call their functions (declared in other files of src/methods/your_name).
"""
import numpy as np

from src.forward_model import CFA
from src.methods.Chardon_tom.utils import *
# NOTE: it is normal that the reconstruction lasts several minutes (about 3 min on the author's computer).
def run_reconstruction(y: np.ndarray, cfa: str) -> np.ndarray:
"""Performs demosaicking on y.
Args:
y (np.ndarray): Mosaicked image to be reconstructed.
cfa (str): Name of the CFA. Can be bayer or quad_bayer.
Returns:
np.ndarray: Demosaicked image.
"""
    # Define constants and operators.
    input_shape = (y.shape[0], y.shape[1], 3)
    op = CFA(cfa, input_shape)
    res = op.adjoint(y)
    N, M = input_shape[0], input_shape[1]
#interpolating green channel
for i in range (N):
for j in range (M):
if res[i,j,1] ==0:
neighbors = get_neighbors(res,1,i,j,N,M)
weights = get_weights(res,i,j,1,N,M)
res[i,j,1] = interpolate_green(weights, neighbors)
    # First interpolation of the red channel
for i in range (1,N,2):
for j in range (0,M,2):
neighbors = get_neighbors(res,0,i,j,N,M)
neighbors_G = get_neighbors(res,1,i,j,N,M)
weights = get_weights(res,i,j,0,N,M)
res[i,j,0] = interpolate_red_blue(weights,neighbors, neighbors_G)
# second interpolation of red channel
for i in range (N):
for j in range (M):
if res[i,j,0] ==0:
neighbors = get_neighbors(res,0,i,j,N,M)
weights = get_weights(res,i,j,0,N,M)
res[i,j,0] = interpolate_green(weights, neighbors)
#first interpolation of blue channel
for i in range (0,N,2):
for j in range (1,M,2):
neighbors = get_neighbors(res,2,i,j,N,M)
neighbors_G = get_neighbors(res,1,i,j,N,M)
weights = get_weights(res,i,j,2,N,M)
res[i,j,2] = interpolate_red_blue(weights, neighbors, neighbors_G)
#second interpolation of blue channel
for i in range (N):
for j in range (M):
if res[i,j,2] ==0:
neighbors = get_neighbors(res,2,i,j,N,M)
weights = get_weights(res,i,j,2,N,M)
res[i,j,2] = interpolate_green(weights,neighbors)
# k=0
# while k<2 :
# for i in range(input_shape[0]):
# for j in range(input_shape[1]):
# res[i][j][1] = correction_green(res,i,j,N,M)
# for i in range(input_shape[0]):
# for j in range(input_shape[1]):
# res[i][j][0] = correction_red(res,i,j,N,M)
# for i in range(input_shape[0]):
# for j in range(input_shape[1]):
# res[i][j][2] = correction_blue(res,i,j,N,M)
# k+=1
    res = np.clip(res, 0, 1)
return res
import numpy as np
def get_neighbors (img,channel,i,j,N,M):
P1 = img[(i-1)%N,(j-1)%M,channel]
P2 = img[(i-1)%N,j%M,channel]
P3 = img[(i-1)%N,(j+1)%M,channel]
P4 = img[i%N,(j-1)%M,channel]
P5 = img[i%N,j%M,channel]
P6 = img[i%N,(j+1)%M,channel]
P7 = img[(i+1)%N,(j-1)%M,channel]
P8 = img[(i+1)%N,j%M,channel]
P9 = img[(i+1)%N,(j+1)%M,channel]
return np.array([P1,P2,P3,P4,P5,P6,P7,P8,P9])
def get_derivatives(neighbors):
[P1, P2, P3, P4, P5, P6, P7, P8, P9] = neighbors
D_x = (P4 - P6)/2
D_y = (P2 - P8)/2
D_xd = (P3 - P7)/(2*np.sqrt(2))
D_yd = (P1 - P9)/(2*np.sqrt(2))
return ([D_x, D_y, D_xd, D_yd])
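# (Added note) D_x and D_y are central differences across the 3x3 window;
# D_xd and D_yd are the diagonal differences, divided by 2*sqrt(2) to account
# for the longer spacing between diagonal samples.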
def get_weights(mosaic_image, i, j, channel, N, M):
    derivatives_neighbors = []
    for di in range(-1, 2):
        for dj in range(-1, 2):
            derivatives_neighbors.append(get_derivatives(
                get_neighbors(mosaic_image, channel, i+di, j+dj, N, M)))
    [Dx, Dy, Dxd, Dyd] = derivatives_neighbors[4]
    E1 = 1/np.sqrt(1 + Dyd**2 + derivatives_neighbors[0][3]**2)
    E2 = 1/np.sqrt(1 + Dy**2 + derivatives_neighbors[1][1]**2)
    E3 = 1/np.sqrt(1 + Dxd**2 + derivatives_neighbors[2][2]**2)
    E4 = 1/np.sqrt(1 + Dx**2 + derivatives_neighbors[3][0]**2)
    E6 = 1/np.sqrt(1 + Dxd**2 + derivatives_neighbors[5][2]**2)
    E7 = 1/np.sqrt(1 + Dy**2 + derivatives_neighbors[6][1]**2)
    E8 = 1/np.sqrt(1 + Dyd**2 + derivatives_neighbors[7][3]**2)
    E9 = 1/np.sqrt(1 + Dx**2 + derivatives_neighbors[8][0]**2)
    E = [E1, E2, E3, E4, E6, E7, E8, E9]
    return E
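# (Added note) Each E_k above is an inverse-gradient weight of the form
# 1/sqrt(1 + D**2 + D_k**2): it becomes small across strong edges, so
# neighbors lying over an edge contribute little to the interpolation.
# The center pixel (index 5) gets no weight, hence the 8-element list.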
def interpolate_green(weights, neighbors):
[E1, E2, E3, E4, E6, E7, E8, E9] = weights
[P1, P2, P3, P4, P5, P6, P7, P8, P9] = neighbors
I5 = (E2*P2 + E4*P4 + E6*P6 + E8*P8)/(E2 + E4 + E6 + E8)
return (I5)
def interpolate_red_blue(weights, neighbors, green_neighbors):
[E1, E2, E3, E4, E6, E7, E8, E9] = weights
[P1, P2, P3, P4, P5, P6, P7, P8, P9] = neighbors
[G1, G2, G3, G4, G5, G6, G7, G8, G9] = green_neighbors
I5 = G5*(E1*P1/G1 + E3*P3/G3 + E7*P7/G7 + E9*P9/G9)/(E1 + E3 + E7 + E9)
return (I5)
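# (Added note) interpolate_red_blue relies on the color-ratio model: the ratio
# between a chroma channel and green is assumed locally constant, so the
# missing value is G5 times a weighted average of the diagonal ratios P_k/G_k.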
def correction_green(res,i,j,N,M):
[G1,G2,G3,G4,G5,G6,G7,G8,G9] = get_neighbors(res,1,i,j,N,M)
[R1,R2,R3,R4,R5,R6,R7,R8,R9] = get_neighbors(res,0,i,j,N,M)
[B1,B2,B3,B4,B5,B6,B7,B8,B9] = get_neighbors(res,2,i,j,N,M)
[E1,E2,E3,E4,E6,E7,E8,E9] = get_weights(res,i,j,1,N,M)
Gb5 = R5*((E2*G2)/B2 + (E4*G4)/B4 + (E6*G6)/B6 + (E8*G8)/B8)/(E2 + E4 + E6 + E8)
Gr5 = B5*((E2*G2)/R2 + (E4*G4)/R4 + (E6*G6)/R6 + (E8*G8)/R8)/(E2 + E4 + E6 + E8)
G5 = (Gb5 + Gr5)/2
return G5
def correction_red(res,i,j,N,M) :
[G1,G2,G3,G4,G5,G6,G7,G8,G9] = get_neighbors(res,1,i,j,N,M)
[R1,R2,R3,R4,R5,R6,R7,R8,R9] = get_neighbors(res,0,i,j,N,M)
[E1,E2,E3,E4,E6,E7,E8,E9] = get_weights(res,i,j,0,N,M)
R5 = G5*((E1*R1)/G1 + (E2*R2)/G2 + (E3*R3)/G3 + (E4*R4)/G4 + (E6*R6)/G6 + (E7*R7)/G7 + (E8*R8)/G8 + (E9*R9)/G9)/(E1 + E2 + E3 + E4 + E6 + E7 + E8 + E9)
return R5
def correction_blue(res,i,j,N,M) :
[G1,G2,G3,G4,G5,G6,G7,G8,G9] = get_neighbors(res,1,i,j,N,M)
[B1,B2,B3,B4,B5,B6,B7,B8,B9] = get_neighbors(res,2,i,j,N,M)
[E1,E2,E3,E4,E6,E7,E8,E9] = get_weights(res,i,j,2,N,M)
B5 = G5*((E1*B1)/G1 + (E2*B2)/G2 + (E3*B3)/G3 + (E4*B4)/G4 + (E6*B6)/G6 + (E7*B7)/G7 + (E8*B8)/G8 + (E9*B9)/G9)/(E1 + E2 + E3 + E4 + E6 + E7 + E8 + E9)
return B5
File added
import numpy as np
def find_Knearest_neighbors(z, chan, i, j, N, M):
"""Finds a pixel's neighbors on a channel"""
return np.array([z[(i+di)%N, (j+dj)%M, chan] for di in range(-1, 2) for dj in range(-1, 2)])
def calculate_directional_gradients(neighbors):
"""Gives the directional derivative"""
P1, P2, P3, P4, P5, P6, P7, P8, P9 = neighbors
Dx, Dy = (P4 - P6)/2, (P2 - P8)/2
Dxd, Dyd = (P3 - P7)/(2*np.sqrt(2)), (P1 - P9)/(2*np.sqrt(2))
return [Dx, Dy, Dxd, Dyd]
def calculate_adaptive_weights(z, neigh, dir_deriv, chan, i, j, N, M):
    [Dx, Dy, Dxd, Dyd] = dir_deriv
    [P1, P2, P3, P4, P5, P6, P7, P8, P9] = neigh
    E = []
    c = 1
    # Two distinct loop variables are needed here: using the same variable for
    # both loops would visit only the diagonal neighbors.
    for di in range(-1, 2):
        for dj in range(-1, 2):
            n = find_Knearest_neighbors(z, chan, i+di, j+dj, N, M)
            dd = calculate_directional_gradients(n)
            if c == 1 or c == 9:
                E.append(1/np.sqrt(1 + Dyd**2 + dd[3]**2))
            elif c == 2 or c == 8:
                E.append(1/np.sqrt(1 + Dy**2 + dd[1]**2))
            elif c == 3 or c == 7:
                E.append(1/np.sqrt(1 + Dxd**2 + dd[2]**2))
            elif c == 4 or c == 6:
                E.append(1/np.sqrt(1 + Dx**2 + dd[0]**2))
            c += 1
    return E
def interpolate_pixel(neigh,weights):
"""This function performs interpolation for a single pixel by calculating a weighted average of its neighboring pixels"""
[P1,P2,P3,P4,P5,P6,P7,P8,P9] = neigh
[E1,E2,E3,E4,E6,E7,E8,E9] = weights
num5 = E2*P2 + E4*P4 + E6*P6 + E8*P8
den5 = E2 + E4 + E6 + E8
I5 = num5/den5
return I5
def interpolate_RedBlue(neighbors, neighbors_G, weights):
"""This function specifically interpolates a pixel in the red or blue channels"""
[P1,P2,P3,P4,P5,P6,P7,P8,P9] = neighbors
[G1,G2,G3,G4,G5,G6,G7,G8,G9] = neighbors_G
[E1,E2,E3,E4,E6,E7,E8,E9] = weights
num5 = ((E1*P1)/G1) + ((E3*P3)/G3) + ((E7*P7)/G7) + ((E9*P9)/G9)
den5 = E1 + E3 + E7 + E9
I5 = G5 * num5/den5
return I5
source diff could not be displayed: it is too large. Options to address this: view the blob.
import numpy as np
from src.forward_model import CFA
from src.methods.ELAMRANI_Mouna.functions import *
def run_reconstruction(y: np.ndarray, cfa: str) -> np.ndarray:
    input_shape = (y.shape[0], y.shape[1], 3)
    op = CFA(cfa, input_shape)
    img_res = op.adjoint(y)
    N, M = img_res.shape[0], img_res.shape[1]
def interpolate_channel(img_res, channel, first_pass, N, M):
for i in range(N):
for j in range(M):
if first_pass and ((channel == 0 and i % 2 == 1 and j % 2 == 0) or
(channel == 2 and i % 2 == 0 and j % 2 == 1)):
neighbors = find_Knearest_neighbors(img_res, channel, i, j, N, M)
neighbors_G = find_Knearest_neighbors(img_res, 1, i, j, N, M)
dir_deriv = calculate_directional_gradients(neighbors_G)
weights = calculate_adaptive_weights(img_res, neighbors_G, dir_deriv, 1, i, j, N, M)
img_res[i, j, channel] = interpolate_RedBlue(neighbors, neighbors_G, weights)
elif not first_pass and img_res[i, j, channel] == 0:
neighbors = find_Knearest_neighbors(img_res, channel, i, j, N, M)
dir_deriv = calculate_directional_gradients(neighbors)
weights = calculate_adaptive_weights(img_res, neighbors, dir_deriv, channel, i, j, N, M)
img_res[i, j, channel] = interpolate_pixel(neighbors, weights)
return img_res
    # Interpolation for each channel
    img_res = interpolate_channel(img_res, 1, False, N, M)  # Green channel interpolation
    img_res = interpolate_channel(img_res, 0, True, N, M)   # First pass on the red channel
    img_res = interpolate_channel(img_res, 0, False, N, M)  # Second pass on the red channel
    img_res = interpolate_channel(img_res, 2, True, N, M)   # First pass on the blue channel
    img_res = interpolate_channel(img_res, 2, False, N, M)  # Second pass on the blue channel
    img_res = np.clip(img_res, 0, 1)
return img_res
File added
import numpy as np
from scipy.signal import correlate2d
from src.forward_model import CFA
def malvar_he_cutler(y: np.ndarray, op: CFA) -> np.ndarray:
    """Performs demosaicking using the Malvar-He-Cutler algorithm.
Args:
op (CFA): CFA operator.
y (np.ndarray): Mosaicked image.
Returns:
np.ndarray: Demosaicked image.
"""
red_mask, green_mask, blue_mask = [op.mask[:, :, 0], op.mask[:, :, 1], op.mask[:, :, 2]]
mosaicked_image = np.float32(y)
demosaicked_image = np.empty(op.input_shape)
if op.cfa == 'quad_bayer':
filters = get_quad_bayer_filters()
else:
filters = get_default_filters()
demosaicked_image = apply_demosaicking_filters(
mosaicked_image,demosaicked_image, red_mask, green_mask, blue_mask, filters
)
return demosaicked_image
def get_quad_bayer_filters():
coefficient_scale = 0.03125
return {
"G_at_R_and_B": np.array([
[0, 0, 0, 0, -1, -1, 0, 0, 0, 0],
[0, 0, 0, 0, -1, -1, 0, 0, 0, 0],
[0, 0, 0, 0, 2, 2, 0, 0, 0, 0],
[0, 0, 0, 0, 2, 2, 0, 0, 0, 0],
[-1, -1, 2, 2, 4, 4, 2, 2, -1, -1],
[-1, -1, 2, 2, 4, 4, 2, 2, -1, -1],
[0, 0, 0, 0, 2, 2, 0, 0, 0, 0],
[0, 0, 0, 0, 2, 2, 0, 0, 0, 0],
[0, 0, 0, 0, -1, -1, 0, 0, 0, 0],
[0, 0, 0, 0, -1, -1, 0, 0, 0, 0]
]) * coefficient_scale,
"R_at_GR_and_B_at_GB": np.array([
[0, 0, 0, 0, 0.5, 0.5, 0, 0, 0, 0],
[0, 0, 0, 0, 0.5, 0.5, 0, 0, 0, 0],
[0, 0, -1, -1, 0, 0, -1, -1, 0, 0],
[0, 0, -1, -1, 0, 0, -1, -1, 0, 0],
[-1, -1, 4, 4, 5, 5, 4, 4, -1, -1],
[-1, -1, 4, 4, 5, 5, 4, 4, -1, -1],
[0, 0, -1, -1, 0, 0, -1, -1, 0, 0],
[0, 0, -1, -1, 0, 0, -1, -1, 0, 0],
[0, 0, 0, 0, 0.5, 0.5, 0, 0, 0, 0],
[0, 0, 0, 0, 0.5, 0.5, 0, 0, 0, 0]
]) * coefficient_scale,
"R_at_GB_and_B_at_GR": np.array([
[0, 0, 0, 0, -1, -1, 0, 0, 0, 0],
[0, 0, 0, 0, -1, -1, 0, 0, 0, 0],
[0, 0, -1, -1, 4, 4, -1, -1, 0, 0],
[0, 0, -1, -1, 4, 4, -1, -1, 0, 0],
[0.5, 0.5, 0, 0, 5, 5, 0, 0, 0.5, 0.5],
[0.5, 0.5, 0, 0, 5, 5, 0, 0, 0.5, 0.5],
[0, 0, -1, -1, 4, 4, -1, -1, 0, 0],
[0, 0, -1, -1, 4, 4, -1, -1, 0, 0],
[0, 0, 0, 0, -1, -1, 0, 0, 0, 0],
[0, 0, 0, 0, -1, -1, 0, 0, 0, 0]
]) * coefficient_scale,
"R_at_B_and_B_at_R": np.array([
[0, 0, 0, 0, -1.5, -1.5, 0, 0, 0, 0],
[0, 0, 0, 0, -1.5, -1.5, 0, 0, 0, 0],
[0, 0, 2, 2, 0, 0, 2, 2, 0, 0],
[0, 0, 2, 2, 0, 0, 2, 2, 0, 0],
[-1.5, -1.5, 0, 0, 6, 6, 0, 0, -1.5, -1.5],
[-1.5, -1.5, 0, 0, 6, 6, 0, 0, -1.5, -1.5],
[0, 0, 2, 2, 0, 0, 2, 2, 0, 0],
[0, 0, 2, 2, 0, 0, 2, 2, 0, 0],
[0, 0, 0, 0, -1.5, -1.5, 0, 0, 0, 0],
[0, 0, 0, 0, -1.5, -1.5, 0, 0, 0, 0]
]) * coefficient_scale,
}
def get_default_filters():
coefficient_scale = 0.125
return {
"G_at_R_and_B": np.array([
[0, 0, -1, 0, 0],
[0, 0, 2, 0, 0],
[-1, 2, 4, 2, -1],
[0, 0, 2, 0, 0],
[0, 0, -1, 0, 0]
]) * coefficient_scale,
"R_at_GR_and_B_at_GB": np.array([
[0, 0, 0.5, 0, 0],
[0, -1, 0, -1, 0],
[-1, 4, 5, 4, -1],
[0, -1, 0, -1, 0],
[0, 0, 0.5, 0, 0]
]) * coefficient_scale,
"R_at_GB_and_B_at_GR": np.array([
[0, 0, -1, 0, 0],
[0, -1, 4, -1, 0],
[0.5, 0, 5, 0, 0.5],
[0, -1, 4, -1, 0],
[0, 0, -1, 0, 0]
]) * coefficient_scale,
"R_at_B_and_B_at_R": np.array([
[0, 0, -1.5, 0, 0],
[0, 2, 0, 2, 0],
[-1.5, 0, 6, 0, -1.5],
[0, 2, 0, 2, 0],
[0, 0, -1.5, 0, 0]
]) * coefficient_scale,
}
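# (Added note) Each 5x5 kernel's entries sum to 8, so the 1/8 scale makes the
# filters unit-gain on flat regions; the quad-Bayer variants duplicate every
# tap into a 2x2 block (sum 32), matching the 1/32 coefficient scale.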
def apply_demosaicking_filters(image, res, red_mask, green_mask, blue_mask, filters):
red_channel = image * red_mask
green_channel = image * green_mask
blue_channel = image * blue_mask
# Create the green channel after applying a filter
green_channel = np.where(
np.logical_or(red_mask == 1, blue_mask == 1),
correlate2d(image, filters['G_at_R_and_B'], mode="same", boundary="symm"),
green_channel
)
# Define masks for extracting pixel values
red_row_mask = np.any(red_mask == 1, axis=1)[:, np.newaxis].astype(np.float32)
red_col_mask = np.any(red_mask == 1, axis=0)[np.newaxis].astype(np.float32)
blue_row_mask = np.any(blue_mask == 1, axis=1)[:, np.newaxis].astype(np.float32)
blue_col_mask = np.any(blue_mask == 1, axis=0)[np.newaxis].astype(np.float32)
def update_channel(channel, row_mask, col_mask, filter_key):
return np.where(
np.logical_and(row_mask == 1, col_mask == 1),
correlate2d(image, filters[filter_key], mode="same", boundary="symm"),
channel
)
# Update the red channel and blue channel
red_channel = update_channel(red_channel, red_row_mask, blue_col_mask, 'R_at_GR_and_B_at_GB')
red_channel = update_channel(red_channel, blue_row_mask, red_col_mask, 'R_at_GB_and_B_at_GR')
blue_channel = update_channel(blue_channel, blue_row_mask, red_col_mask, 'R_at_GR_and_B_at_GB')
blue_channel = update_channel(blue_channel, red_row_mask, blue_col_mask, 'R_at_GB_and_B_at_GR')
# Update R channel and B channel again
red_channel = update_channel(red_channel, blue_row_mask, blue_col_mask, 'R_at_B_and_B_at_R')
blue_channel = update_channel(blue_channel, red_row_mask, red_col_mask, 'R_at_B_and_B_at_R')
res[:, :, 0] = red_channel
res[:, :, 1] = green_channel
res[:, :, 2] = blue_channel
return res
"""The main file for the reconstruction.
This file should NOT be modified except the body of the 'run_reconstruction' function.
Students can call their functions (declared in other files of src/methods/your_name).
"""
import numpy as np
from src.forward_model import CFA
from src.methods.EL_MURR_Theresa.malvar import malvar_he_cutler
def run_reconstruction(y: np.ndarray, cfa: str) -> np.ndarray:
"""Performs demosaicking on y.
Args:
y (np.ndarray): Mosaicked image to be reconstructed.
cfa (str): Name of the CFA. Can be bayer or quad_bayer.
Returns:
np.ndarray: Demosaicked image.
"""
# Performing the reconstruction.
input_shape = (y.shape[0], y.shape[1], 3)
op = CFA(cfa, input_shape)
res = malvar_he_cutler(y,op)
return res
####
####
####
#### #### #### #############
#### ###### #### ##################
#### ######## #### ####################
#### ########## #### #### ########
#### ############ #### #### ####
#### #### ######## #### #### ####
#### #### ######## #### #### ####
#### #### ######## #### #### ####
#### #### ## ###### #### #### ######
#### #### #### ## #### #### ############
#### #### ###### #### #### ##########
#### #### ########## #### #### ########
#### #### ######## #### ####
#### #### ############ ####
#### #### ########## ####
#### #### ######## ####
#### #### ###### ####
# 2023
# Authors: Mauro Dalla Mura and Matthieu Muller