The document discusses various techniques for digital image intensity transformations and histogram processing. It begins with an overview of intensity transformations versus geometric transformations. It then covers log transformations, power-law transformations, and piecewise linear transformations in detail. The document also discusses histogram equalization in depth, including its purpose, principles, and specific operations. Additionally, it compares histogram equalization to other enhancement methods like linear stretch and presents examples of when histogram equalization may fail. Finally, the document introduces fundamentals of spatial filtering, including linear spatial filtering operations using different sized box kernels.
This presentation gives a detailed description of image enhancement techniques, covering basic gray-level transformations, histogram processing, enhancement using arithmetic/logic operations, image averaging methods, and piecewise-linear transformation functions.
3. 3.1 Review
Geometric transformation vs. intensity transformation (spatial domain):
Geometric transformation: the value at the corresponding position of the image does not change, but the pixel position changes.
Intensity transformation: the pixel position in the image does not change, but the value changes.
5. 3.2 The key points and difficulties of this class
Be familiar with the principal techniques used for intensity transformations.
Learn basic log transformations and power-law transformations.
Understand how these two transformations are implemented.
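As a concrete companion to the log and power-law (gamma) transformations named above, here is a minimal sketch. NumPy is my choice of library (the slides prescribe none), and the normalization constants are one common convention, not necessarily the one used in class:

```python
import numpy as np

def log_transform(image, L=256):
    """s = c * log(1 + r), with c chosen so the output spans [0, L-1]."""
    c = (L - 1) / np.log(L)
    return c * np.log1p(image.astype(np.float64))

def power_law_transform(image, gamma, L=256):
    """s = c * r^gamma (gamma correction), normalized to [0, L-1]."""
    r = image.astype(np.float64) / (L - 1)
    return (L - 1) * np.power(r, gamma)

# gamma < 1 lifts dark values (brightening); gamma > 1 darkens midtones
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
bright = power_law_transform(img, gamma=0.5)
dark = power_law_transform(img, gamma=2.0)
```

For gamma < 1 the curve lifts dark values, while gamma > 1 compresses them, which is the usual motivation for gamma correction.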
15. Discussion
What are the advantages and disadvantages of these transformations?
Some are chosen by trial and error; others have a certain basis, namely the intensity distribution (its peaks and valleys).
Discuss the pros and cons of these methods in terms of:
whether the choice is reasonable
the degree of automation
robustness
16. 3.3 Intensity Transformation
Transformations may be chosen by trial and error or on a certain basis (the intensity distribution).
Discussion: you may ask, if the same result can be achieved directly in Photoshop, what is the point of learning these transformations?
Knowing them, we can achieve effects quickly and accurately without a lot of trial and error.
We can perform different transformations in different regions.
We can perform different transformations on different grayscale ranges.
20. 3.2 Histogram Processing
Properties of the image gray histogram:
no spatial information is involved
the same histogram distribution may correspond to different images
information additivity
related to the amount of information in the image
21. Describing an image with its gray histogram
When the grayscale of the image is concentrated in the brighter area, with a considerable part concentrated near 1, the image is overexposed.
When the pixel distribution in the image is “polarized” (concentrated at both extremes), image details are lost.
The distribution of the image histogram is related to the quality of the image to some extent.
3.4 Histogram equalization
22. A “clear” image
The histogram reflects the clarity of the image: when it is evenly distributed, the image looks “clearer”.
Histogram equalization aims to ensure that:
each gray level has a certain number of pixels
different objects have distinguishable grayscale variations
3.4 Histogram equalization
24. From the original image to the target image: transform an arbitrary (random) intensity distribution into a uniform one, i.e., map the original histogram onto the target histogram by s = T(r).
In the original histogram, in general p(r_i) ≠ p(r_j); in the target histogram, p(s_i) = p(s_j) for all levels.
P(T(r_i) < s < T(r_j)) = ∫ from r_i to r_j of p(r) dr = (1/(L − 1)) · (T(r_j) − T(r_i))
If r_j > r_i, then s_j > s_i: T is monotonically increasing, so probabilities P(a ≤ s ≤ b) are preserved under the mapping.
Hence s = T(r) = (L − 1) ∫ from 0 to r of p(w) dw.
3.4 Histogram equalization
25. Discrete case: let r_1, r_2, …, r_k be the distinct gray values in the original image f(x, y), and let p(r_j) be the frequency of each value over all pixels (the normalized histogram).
Unique pixel values of f(x, y): r_1, r_2, …, r_j, …, r_k
Frequencies p(r_j): p_1, p_2, …, p_j, …, p_k
Mapped values g(x, y)/(L − 1): p_1, p_1 + p_2, …, p_1 + … + p_j, …, p_1 + … + p_k
The continuous transform s = T(r) = (L − 1) ∫ from 0 to r of p(w) dw thus becomes the cumulative sum s_j = (L − 1)(p_1 + … + p_j); under gray-value quantization, the quantization level closest to s_j is taken as the final gray value.
3.4 Histogram equalization
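The discrete procedure above (normalized histogram → cumulative sum → scale by L − 1 → round to the nearest level) can be sketched as follows; NumPy and the tiny 4-level example are my own illustration, not from the slides:

```python
import numpy as np

def histogram_equalize(image, L=256):
    """Discrete histogram equalization: s_j = (L-1) * (p_1 + ... + p_j),
    rounded to the nearest quantization level."""
    hist = np.bincount(image.ravel(), minlength=L)
    p = hist / image.size                    # normalized histogram p(r_j)
    cdf = np.cumsum(p)                       # cumulative sums
    mapping = np.round((L - 1) * cdf).astype(np.uint8)
    return mapping[image]

# a low-contrast 4-level image: levels get spread toward the full range
img = np.array([[0, 0, 1], [1, 2, 2], [3, 3, 3]], dtype=np.uint8)
eq = histogram_equalize(img, L=4)
```

Note that several input levels can map to the same output level, which is why the discrete result is generally not perfectly flat.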
36. Summary and Discussion
1. After histogram equalization, how does the gray level of the new image change?
2. What are the advantages and disadvantages of gray histogram equalization? (Is human intervention required? Is it reversible? Is it valid in all cases?)
Purpose of histogram equalization
Principle of histogram equalization
Specific operation of histogram equalization
3.4 Histogram equalization
39. 3.5 Histogram Processing
Comparison of image enhancement by transformation function:
Linear stretch:
simple transformation
can be transformed back to the original image
needs manually set parameters
poor generality
Histogram equalization:
less information loss
automated, no parameters required
unable to restore the original image
poor generality
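To make the "needs manually set parameters" point concrete, a hypothetical linear (contrast) stretch looks like this; the clip-then-scale form and the parameter names r_min/r_max are my assumptions:

```python
import numpy as np

def linear_stretch(image, r_min, r_max, L=256):
    """Contrast stretch: linearly map [r_min, r_max] onto [0, L-1].
    r_min and r_max must be chosen by hand, unlike histogram equalization."""
    r = np.clip(image.astype(np.float64), r_min, r_max)
    return (L - 1) * (r - r_min) / (r_max - r_min)

img = np.array([50, 100, 150], dtype=np.uint8)
stretched = linear_stretch(img, 50, 150)
```

On the unclipped range the mapping is invertible (you can recover the original values), matching the comparison above.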
42. Some improvement methods
3.5 Histogram Processing
Local histogram processing: some differences and consistencies in a local area need to be preserved, but they are often destroyed by global processing, because the globally computed value differs markedly from the locally computed value. (p. 150–153)
45. Histogram equalization takes any histogram, any pixel distribution, and matches it to something as uniform as possible.
3.5 Histogram Processing
Histogram matching (specification): transform the input image f(x, y) so that its histogram matches that of a target image g(x, y):
s = T(r) = ∫ from 0 to r of p_r(u) du   (equalize the input distribution)
s = G(z) = ∫ from 0 to z of p_z(v) dv   (equalize the target distribution)
Matching then sends each r to the z whose equalized value G(z) equals T(r).
46. 3.5 Histogram Processing
Histogram matching (specification)
Just by doing histogram equalizations, we can match any two desired distributions:
Step 1: Compute the histogram of the input image r and equalize it to obtain the histogram-equalized values s1.
Step 2: Compute the histogram of the target image z and equalize it to obtain the histogram-equalized values s2.
Step 3: For every value of s1, use the stored values of s2 from Step 2 to find the corresponding value closest to s1. Store these mappings from s1 to z.
Step 4: For every value of the image r, replace {r_k} with {z'_k} using the mappings found in Step 3, to obtain the histogram-specified image.
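Steps 1–4 above can be sketched as below. The nearest-value search via argmin and the rounding of equalized levels are my implementation choices; the slides describe the procedure only abstractly:

```python
import numpy as np

def match_histogram(source, target, L=256):
    """Histogram specification via two equalizations: map each equalized
    source level s1 to the target level z whose equalized value s2 is closest."""
    def cdf_of(img):
        hist = np.bincount(img.ravel(), minlength=L)
        return np.cumsum(hist) / img.size

    s1 = np.round((L - 1) * cdf_of(source))   # Step 1: T(r)
    s2 = np.round((L - 1) * cdf_of(target))   # Step 2: G(z)
    # Step 3: for each s1 value, find the z minimizing |s2[z] - s1|
    mapping = np.argmin(np.abs(s2[None, :] - s1[:, None]), axis=1).astype(np.uint8)
    return mapping[source]                    # Step 4: apply the mapping

src = np.array([[0, 1], [2, 3]], dtype=np.uint8)
tgt = np.array([[10, 10], [200, 200]], dtype=np.uint8)
out = match_histogram(src, tgt)
```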
47. The expressions of spatial domain processing
When the neighborhood is of size 1 × 1, the new gray value is obtained by transforming only the original pixel value; otherwise it is obtained by transforming the neighborhood pixels. The function T can be linear or non-linear.
48. 3.6 Fundamentals of Spatial Filtering
If a pixel value in an image is lost (or affected by noise), can we use the information elsewhere to estimate its value?
Globally: it can be approximated by the average of all values of the entire image, so the value of each pixel changes by the same rule.
Locally: it can be approximated by the average of several nearby pixel values, i.e., related to its location.
49. Spatial filtering modifies an image by replacing the value of each pixel by a function of the values of the pixel and its neighbors.
Linear spatial filter (e.g., Y = WX + b)
Nonlinear spatial filter
3.6 Fundamentals of Spatial Filtering
50. The mechanics of linear spatial filtering
A linear spatial filter performs a sum-of-products operation between an image f and a filter kernel w.
Kernel:
an array;
its size defines the neighborhood of operation;
its coefficients determine the nature of the filter;
also called a mask, template, or window;
a kind of feature extractor.
3.6 Fundamentals of Spatial Filtering
51. The mechanics of linear spatial filtering
The size of the kernel is odd (m = 2a + 1, n = 2b + 1), because we must ensure that the current point being processed lies at the exact center.
3.6 Fundamentals of Spatial Filtering
52. 3.6 Fundamentals of Spatial Filtering
The mechanics of linear spatial filtering with box kernels of sizes 3 × 3, 11 × 11, and 21 × 21: the larger the neighborhood, the more pixels we are averaging.
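A minimal sketch of the sum-of-products mechanics with a normalized box kernel; zero padding and the explicit loops are my simplifications for clarity, not a statement of how the course implements it:

```python
import numpy as np

def correlate2d(image, kernel):
    """Sum-of-products linear filtering (correlation) with zero padding,
    keeping the kernel center over each pixel; kernel sides must be odd."""
    m, n = kernel.shape
    a, b = m // 2, n // 2                     # m = 2a+1, n = 2b+1
    padded = np.pad(image.astype(np.float64), ((a, a), (b, b)))
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + m, j:j + n] * kernel)
    return out

box = np.ones((3, 3)) / 9                     # normalized 3x3 box kernel
img = np.zeros((5, 5)); img[2, 2] = 9.0       # a single bright pixel
smoothed = correlate2d(img, box)
```

Filtering the single bright pixel spreads its value evenly over the 3 × 3 neighborhood, which is exactly the averaging the text describes.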
53. 3.6 Fundamentals of Spatial Filtering
Spatial correlation and convolution
Correlation consists of moving the center of a kernel over an image and computing the sum of products at each location.
Spatial convolution is the same, except that the correlation kernel is rotated by 180°.
When the values of a kernel are symmetric about its center, correlation and convolution yield the same result.
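A quick numerical illustration of the 180° relationship (NumPy is an assumption; the slides name no library):

```python
import numpy as np

# convolution = correlation with the kernel rotated by 180 degrees
kernel = np.array([[0., 1., 2.],
                   [3., 4., 5.],
                   [6., 7., 8.]])
rotated = np.rot90(kernel, 2)        # the kernel that convolution would slide

# a kernel symmetric about its center is unchanged by the rotation,
# so correlation and convolution then agree
symmetric = np.array([[1., 2., 1.],
                      [2., 4., 2.],
                      [1., 2., 1.]])
same = np.array_equal(np.rot90(symmetric, 2), symmetric)
```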
54. 3.6 Fundamentals of Spatial Filtering
We can define correlation and convolution so that every element of w (instead of just its center) visits every pixel in f. This requires the starting configuration to be one in which the right, lower corner of the kernel coincides with the origin of the image.
With this padding, for an image of size M × N and a kernel of size m × n, the resulting full correlation or convolution array is of size S_v × S_h, where S_v = M + m − 1 and S_h = N + n − 1.
55. 3.6 Fundamentals of Spatial Filtering
Spatial correlation and convolution
“Convolving a kernel with an image” is often used to denote the sliding, sum-of-products process.
Sometimes an image is filtered (i.e., convolved) sequentially; such multistage filtering can be done in a single filtering operation, because the convolution kernels can be combined (and, conversely, separated).
57. 3.7 Smoothing (Lowpass) Spatial Filters
Box filter kernels (average kernel)
Because random noise typically consists of sharp transitions in intensity, an obvious application of smoothing is noise reduction.
The difference between each pixel and its surrounding pixels becomes smaller than in the original, so a smoothing filter can be used to smooth the image and remove some false contours.
Smoothing is used to reduce irrelevant detail in an image.
The kernel should be normalized.
Examples use box kernels of sizes 3 × 3, 11 × 11, and 21 × 21.
59. 3.7 Smoothing (Lowpass) Spatial Filters
Lowpass Gaussian filter kernels
A circularly symmetric (also called isotropic) kernel.
Distances from the center for various sizes of square kernels.
60. 3.7 Smoothing (Lowpass) Spatial Filters
Lowpass Gaussian filter kernels, with K = 1 and σ = 1.
If all the kernels are Gaussian, we can use the results in the table to compute the standard deviation of the composite kernel (and so define it), without actually performing the convolution of all the kernels.
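The composite-kernel claim can be checked numerically: convolving Gaussians of standard deviations σ₁ and σ₂ gives a Gaussian of standard deviation sqrt(σ₁² + σ₂²). The sampled-Gaussian construction below is my own sketch, not the course's table:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Isotropic sampled Gaussian kernel, normalized to sum to 1 (K absorbed)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def kernel_std(k):
    """Measured standard deviation of a kernel along one axis."""
    ax = np.arange(k.shape[0]) - k.shape[0] // 2
    marginal = k.sum(axis=0)
    return np.sqrt(np.sum(marginal * ax**2))

g1 = gaussian_kernel(31, 2.0)
g2 = gaussian_kernel(31, 1.5)
composite_sigma = np.sqrt(2.0**2 + 1.5**2)    # predicted: 2.5, no convolution needed

# verify by actually convolving the 1-D marginals
m1, m2 = g1.sum(axis=0), g2.sum(axis=0)
c = np.convolve(m1, m2)
ax = np.arange(c.size) - c.size // 2
measured = np.sqrt(np.sum(c * ax**2))
```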
61. 3.7 Smoothing (Lowpass) Spatial Filters
Comparison: a Gaussian kernel of size 21 × 21 with standard deviation 3.5; a Gaussian kernel of size 43 × 43 with standard deviation 3.5; and box kernels of sizes 11 × 11 and 21 × 21.
62. 3.7 Smoothing (Lowpass) Spatial Filters
Comparison of a box kernel of size 71 × 71 with a Gaussian kernel of size 151 × 151 (K = 1, σ = 25):
the box filter produced linear smoothing, with the transition from black to white having the shape of a ramp
the Gaussian filter yielded significantly smoother results around the edge transitions
63. 3.7 Smoothing (Lowpass) Spatial Filters
Applications: using lowpass filtering and thresholding for region extraction.
A 2566 × 2758 Hubble Telescope image; the result of lowpass filtering with a Gaussian kernel of size 151 × 151, σ = 25; and the result of thresholding the filtered image.
64. 3.7 Smoothing (Lowpass) Spatial Filters
Applications: shading correction using lowpass filtering.
Lowpass filtering is a rugged, simple method for estimating shading patterns.
Example: a 512 × 512 Gaussian kernel (four times the size of the squares), with K = 1 and σ = 128 (equal to the size of the squares).
65. 3.7 Smoothing (Lowpass) Spatial Filters
Order-statistic (nonlinear) filters
The response is based on ordering (ranking) the pixels contained in the region encompassed by the filter. Smoothing is achieved by replacing the value of the center pixel with the value determined by the ranking result.
Median filter: replaces the value of the center pixel by the median of the intensity values in the neighborhood of that pixel; it forces points to be more like their neighbors.
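A minimal median-filter sketch (edge-replicated padding is my choice; the slides do not specify border handling):

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each pixel with the median of its size-by-size neighborhood
    (edge-replicated padding)."""
    a = size // 2
    padded = np.pad(image, a, mode='edge')
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# salt-and-pepper impulses are removed outright, not smeared
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                               # "salt" impulse
img[1, 3] = 0                                 # "pepper" impulse
clean = median_filter(img)
```

Both impulses vanish entirely, whereas a linear average would have smeared them into their neighborhoods.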
66. 3.7 Smoothing (Lowpass) Spatial Filters
Order-statistic (nonlinear) filters: the median filter.
An image corrupted by salt-and-pepper noise; the result using a 19 × 19 Gaussian lowpass filter kernel with σ = 3; and the result using a 7 × 7 median filter.
67. 3.8 Sharpening (Highpass) Spatial Filters
Distribution of grayscale changes in the image: along a scan line, examine the gray distribution of the image in the direction of the scan line, together with its first and second derivatives.
69. 3.8 Sharpening (Highpass) Spatial Filters
Image gradient (first derivative): the gradient of an image f at coordinates (x, y) is defined as the two-dimensional column vector ∇f = [∂f/∂x, ∂f/∂y]^T. The magnitude (length) of the vector ∇f is denoted M(x, y).
70. 3.8 Sharpening (Highpass) Spatial Filters
Image gradient: the derivative operation becomes a difference operation. For discrete images, differentiation can be approximated by differences:
||∇f|| = sqrt( (f(x, y) − f(x+1, y))² + (f(x, y) − f(x, y+1))² )
Computationally, the squares and square root are approximated by absolute values:
∇f ≈ |f(x, y) − f(x+1, y)| + |f(x, y) − f(x, y+1)|
The magnitude of the gradient is thus approximated as the (absolute) sum of the adjacent pixel differences along the horizontal and vertical axes.
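The absolute-difference approximation above can be written directly; the forward differences and the zero-filled last row/column are my conventions:

```python
import numpy as np

def gradient_magnitude(f):
    """Approximate ||grad f|| by the sum of absolute forward differences
    along the two axes (last row/column use a zero difference)."""
    gx = np.zeros_like(f, dtype=np.float64)
    gy = np.zeros_like(f, dtype=np.float64)
    gx[:-1, :] = f[:-1, :] - f[1:, :]        # f(x, y) - f(x+1, y)
    gy[:, :-1] = f[:, :-1] - f[:, 1:]        # f(x, y) - f(x, y+1)
    return np.abs(gx) + np.abs(gy)

# a vertical step edge: the gradient responds only at the transition column
f = np.zeros((4, 6)); f[:, 3:] = 10.0
g = gradient_magnitude(f)
```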
71. 3.8 Sharpening (Highpass) Spatial Filters
Image sharpening:
① the pixel value of the new image is directly replaced by the gradient of the original image, or
② the output image is determined according to a gradient threshold.
72. 3.8 Sharpening (Highpass) Spatial Filters
Image sharpening using the gradient: the edges of the image are enhanced, and some noise is also amplified.
73. 3.8 Sharpening (Highpass) Spatial Filters
Image sharpening using the gradient: the Roberts operator.
The differential sum of the two directions after rotating ±45°.
The area involved in the calculation is too small, so the obtained edges are weak.
74. 3.8 Sharpening (Highpass) Spatial Filters
Image sharpening using the gradient with a 3 × 3 kernel (along x and y): maintaining directional consistency in the calculation, the 3 × 3 kernel can be viewed as a superposition of multiple 2 × 2 regions with respect to the current pixel position.
82. 3.8 Sharpening (Highpass) Spatial Filters
Second-order derivative of f(x): flexible extensions of the Laplace operator, e.g. the kernel
 1  -2   1
-2   4  -2
 1  -2   1
Background features can be “recovered” while still preserving the sharpening effect of the Laplacian by adding the Laplacian image to the original: g(x, y) = f(x, y) + c∇²f(x, y). Let c = −1.
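A sketch of g = f + c∇²f with c = −1, using the standard 4-neighbor Laplacian kernel [[0, 1, 0], [1, −4, 1], [0, 1, 0]] (negative center) rather than the 9-point variant shown above; edge-replicated padding is my assumption:

```python
import numpy as np

def laplacian_sharpen(f, c=-1.0):
    """g = f + c * lap(f), where lap uses the 4-neighbor kernel
    [[0,1,0],[1,-4,1],[0,1,0]] (negative center, so c = -1 sharpens)."""
    padded = np.pad(f.astype(np.float64), 1, mode='edge')
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * padded[1:-1, 1:-1])
    return f + c * lap

# a pixel brighter than its surroundings on a flat background
f = np.full((5, 5), 10.0); f[2, 2] = 20.0
g = laplacian_sharpen(f)
```

The bright pixel is boosted and its immediate neighbors suppressed, increasing local contrast at the detail while flat regions are left unchanged.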
91. 3.8 Combining Spatial Enhancement Methods
A nuclear whole-body bone scan image.
Objective: show more of the skeletal detail. Method: enhance the edges.
Shown: the Laplacian of the image and the sharpened image.
92. 3.8 Combining Spatial Enhancement Methods
Objective: show more of the skeletal detail. Method: enhance the edges and suppress noise.
Shown: the Sobel gradient of the image; the Sobel image smoothed with a 5 × 5 box filter; and the mask image formed by the product of (b) and (e).
93. 3.8 Combining Spatial Enhancement Methods
Objective: show more of the skeletal detail. Method: enhance the edges and suppress noise.
Shown: the sharpened image obtained by adding images (a) and (f).
95. Homework (deadline: before 9 April)
1. Consider that the maximum value of an image I_1 is M and its minimum is m (m ≠ M). Give an intensity transform that maps the image I_1 onto I_2 such that the maximal value of I_2 is L and the minimal value is:
2. Why does global discrete histogram equalization not, in general, yield a flat (uniform) histogram?
A. Because images are in color.
B. Because the histogram equalization mathematical derivation doesn't exist for discrete signals.
C. In global histogram equalization, all pixels with the same value are mapped to the same value.
D. Actually, global discrete histogram equalization always yields flat histograms by definition.
96. Homework
3. Discrete histogram equalization is an invertible operation, meaning we can recover the original image from the equalized one by inverting the operation, since?
A. Actually, histogram equalization is in general non-invertible.
B. There is a unique histogram equalization formula per image.
C. Pixels with different values are mapped to pixels with different values.
D. Images have unique histograms.
4. Given an image with only 3 pixels and 4 possible values for each one, determine the number of possible different images and the number of possible different histograms. How many images and histograms are there?
97. Homework
5. This image is a 6 × 6 grayscale image I(x, y) with 4 gray levels (x = 0, 1, 2, …, 5; y = 0, 1, 2, …, 5); the value of each point in the figure represents the gray value of the image pixel.
1) Calculate the histogram of the image.
2) Use histogram equalization to process this image (write out the details of the process).
3) Write the new histogram after histogram equalization.
98. Homework
6. Which integer number minimizes
7. Which integer number minimizes
8. Applying a 3 × 3 averaging filter to an image a large (infinite) number of times is:
A. Equivalent to replacing all the pixel values by 0.
B. Equivalent to replacing all the pixel values by the average of the values in the original image.
C. The same as applying it a single time.
D. The same as applying a median filter.
99. Homework
9. In the original image used to generate the three blurred images shown, the vertical bars are 5 pixels wide and 100 pixels high, and their separation is 20 pixels. The image was blurred using square box kernels of sizes 23, 25, and 45 elements on the side, respectively. The vertical bars in the left, lower part of (a) and (c) are blurred, but a clear separation exists between them. However, the bars have merged in image (b), despite the fact that the kernel used to generate this image is much smaller than the kernel that produced image (c). Explain the reason for this.