add the Chan-Vese Segmentation for Gray #84

Open: wants to merge 6 commits into master
Project.toml (4 additions & 1 deletion)
@@ -6,6 +6,7 @@ version = "1.6.0"
Clustering = "aaaa29a8-35af-508c-8bc3-b662a17a0fe5"
DataStructures = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
Distances = "b4f34e82-e78d-54a5-968a-f98e89d6e8f7"
ImageBase = "c817782e-172a-44cc-b673-b171935fbb9e"
Review comment (Member):
Every dependency requires a corresponding compat entry.
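A compat entry for the new dependencies might look like this (the version bounds below are hypothetical placeholders; use the versions this package actually supports):

```toml
[compat]
ImageBase = "0.1"                  # hypothetical bound
TestImages = "1"                   # hypothetical bound
ImageTransformations = "0.8, 0.9"  # hypothetical bound
```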

ImageCore = "a09fc81d-aa75-5fe9-8630-4744c3626534"
ImageFiltering = "6a3955dd-da59-5b1f-98d4-e7296123deb5"
ImageMorphology = "787d08f9-d448-5407-9aad-5290dd7ab264"
@@ -38,6 +39,8 @@ FileIO = "5789e2e9-d7fb-5bc7-8068-2c6fae9b9549"
ImageMagick = "6218d12a-5da1-5696-b52f-db25d2ecc6d1"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
TestImages = "5e47fb64-e119-507b-a336-dd2b206d9990"
ImageTransformations = "02fcd773-0e25-5acc-982a-7f6622650795"

[targets]
- test = ["Documenter", "FileIO", "ImageMagick", "SparseArrays", "Test"]
+ test = ["Documenter", "FileIO", "ImageMagick", "SparseArrays", "Test", "TestImages", "ImageTransformations"]
src/ImageSegmentation.jl (3 additions)
@@ -5,6 +5,7 @@ import Base: show
using LinearAlgebra, Statistics
using DataStructures, StaticArrays, ImageCore, ImageFiltering, ImageMorphology, LightGraphs, SimpleWeightedGraphs, RegionTrees, Distances, StaticArrays, Clustering, MetaGraphs
using ImageCore.ColorVectorSpace: MathTypes
using ImageBase.ImageCore: GenericGrayImage, GenericImage
Review comment (Member):
We should either remove ImageCore from Project.toml or use:

Suggested change:
- using ImageBase.ImageCore: GenericGrayImage, GenericImage
+ using ImageCore: GenericGrayImage, GenericImage

import Clustering: kmeans, fuzzy_cmeans

const PairOrTuple{K,V} = Union{Pair{K,V},Tuple{K,V}}
@@ -21,6 +22,7 @@ include("meanshift.jl")
include("clustering.jl")
include("merge_segments.jl")
include("deprecations.jl")
include("chan_vese.jl")

export
#accessor methods
@@ -49,6 +51,7 @@ export
kmeans,
fuzzy_cmeans,
merge_segments,
chan_vese,

# types
SegmentedImage,
src/chan_vese.jl (new file, 241 additions)
@@ -0,0 +1,241 @@
"""
chan_vese(img; [μ], [λ₁], [λ₂], [tol], [max_iter], [Δt], [reinitial_flag])
Review comment (Member):
Need to update the keywords for normalize and init_level_set.


Segments image `img` by evolving a level set: an active contour model
that can segment objects without clearly defined boundaries.
Review comment on lines +4 to +5 (johnnychen94, Member, Sep 29, 2021):
Might be worth documenting that this algorithm supports 3d images.

If you don't plan to add RGB support in this PR, it might also be worth mentioning that RGB images are not supported.


# Output

Returns a `BitMatrix`.

# Details

The Chan-Vese algorithm copes well even with images that are difficult
to segment, because it relies on global properties rather than only
local properties such as the gradient. Robustness to noise is one of
its main advantages. See [1], [2], [3] for more details.

# Options

The function arguments are described in detail below.

In the following, 𝐶 denotes the contour of the evolving level set.

## `μ::Float64`

The argument `μ` is a weight controlling the penalty on the total length
of the curve 𝐶.

For example, if the boundaries of the image are quite smooth, a larger `μ`
can prevent 𝐶 from being a complex curve.

Default: 0.25

## `λ₁::Float64`, `λ₂::Float64`

The argument `λ₁` and `λ₂` affect the desired uniformity inside 𝐶 and
outside 𝐶, respectively.

For example, if `λ₁` < `λ₂`, the result is more likely to have a fairly
uniform background with varying grayscale objects in the foreground.

Default: λ₁ = 1.0
λ₂ = 1.0

## `tol::Float64`

The argument `tol` controls the tolerance on the level-set variation
between iterations. If the L² norm of the difference between the level
sets of two consecutive iterations falls below `tol`, the solution is
assumed to have been reached.

Default: 1e-3

## `max_iter::Int`

The argument `max_iter` sets the maximum number of iterations.

Default: 500

## `Δt::Float64`

The argument `Δt` is a multiplicative time-step factor applied at each
iteration, which serves to accelerate the algorithm. Although a larger
`Δt` can speed up the algorithm, it might prevent it from converging to
the solution.

Default: 0.5

## `reinitial_flag::Bool`

The argument `reinitial_flag` controls whether to reinitialize the
level set at each step.

Default: false

# Examples

```julia
using TestImages
using ImageSegmentation

img = testimage("cameraman")

cv_result = chan_vese(img, max_iter=200)
```

# References

[1] An Active Contour Model without Edges, Tony Chan and Luminita Vese,
Scale-Space Theories in Computer Vision, 1999, :DOI:`10.1007/3-540-48236-9_13`
[2] Chan-Vese Segmentation, Pascal Getreuer, Image Processing On Line, 2 (2012),
pp. 214-224, :DOI:`10.5201/ipol.2012.g-cv`
[3] The Chan-Vese Algorithm - Project Report, Rami Cohen, 2011 :arXiv:`1107.2782`
"""
function chan_vese(img::GenericGrayImage;
μ::Float64=0.25,
λ₁::Float64=1.0,
λ₂::Float64=1.0,
tol::Float64=1e-3,
max_iter::Int=500,
Δt::Float64=0.5,
reinitial_flag::Bool=false)
# Signs used in the codes and comments mainly follow paper[3] in the References.
img = float64.(channelview(img))
Review comment (timholy, Member, Sep 16, 2021):
Now I see where you are getting the 3d handling from: it's not actually 3d, it's your color channel. This is almost never needed in JuliaImages; just use the image as provided, and if you've written everything generically it should "just work." That is, for a grayscale image c₁ will be a number or Gray, but for RGB c₁ will be an RGB. And then you can truly support both 2d and 3d images by writing your code in a manner that generalizes across dimensions. So much easier than how other languages do these things, no?

Keep in mind that Python and Julia have reversed "fast axes" for their arrays (Python has its fastest axis last, Julia first), but that in fact the memory layout is the same. Consequently, in Julia if you do call channelview on an RGB image, the color channel is first, not last.
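The layout point can be checked with a plain array; this standalone sketch only demonstrates that Julia's first axis is the fast (contiguous) one:

```julia
# Julia is column-major: stepping the first axis moves one element in memory.
A = reshape(collect(1:6), 2, 3)
@assert A[2, 1] == 2     # adjacent in memory along the first axis
@assert A[1, 2] == 3     # second axis jumps a whole column
@assert vec(A) == 1:6    # vec exposes the underlying memory order
```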

Review comment (johnnychen94, Member, Sep 22, 2021):
If you want a raw numerical array, then just:

Suggested change:
- img = float64.(channelview(img))
+ img = Float64.(img)

But generally, we encourage directly handling Array{<:Colorant} without channelview because it permits more generic implementation as Tim said in the previous comment.

Review comment (Member):
I would recommend deleting this line and the minimum/maximum standardization unless there is strong reason to keep it. Other languages may have done this just to commonize among UInt8 images (ranging from 0:255), UInt16 images (ranging from 0:65535), and Float32/Float64 images (ranging from 0 to 1). But thanks to N0f8 and N0f16, we've already done that. I think it's better to accept the image at face value.

N = ndims(img)
iter = 0
h = 1.0
del = tol + 1
img .= img .- minimum(img)

if maximum(img) != 0
img .= img ./ maximum(img)
end
Review comment (Member):
Why is this standardization needed?

Author reply:
About standardization, I think a standardized image may help a
segmentation algorithm since it spreads the gray levels more widely. If
the gray levels of an image range from 0.2 to 0.8, it seems better to
standardize the image so that they span 0.0 to 1.0.
The test of `chan_vese` uses a resized 64×64 cameraman as the test image,
whose gray levels range from 0.02 to 0.988. The standardization turns out
to make a difference:

julia> using TestImages, Images

julia> img_512 = testimage("cameraman");

julia> maximum(img_512)
Gray{N0f8}(1.0)

julia> minimum(img_512)
Gray{N0f8}(0.0)

julia> img_64 = imresize(testimage("cameraman"), (64, 64));

julia> maximum(img_64)
Gray{N0f8}(0.988)

julia> minimum(img_64)
Gray{N0f8}(0.02)

The test image is shown below:
[image: cameraman_64]

julia> res_no_std = chan_vese(img_64, μ=0.1, tol=1e-2, max_iter=200);  # standardization step removed

julia> sum(res_no_std)
1075

julia> colorview(Gray, res_no_std)

[image: res_64_no_std]

julia> res_std = chan_vese(img_64, μ=0.1, tol=1e-2, max_iter=200);  # standardization step kept

julia> sum(res_std)
1064

julia> colorview(Gray, res_std)

[image: res_64_std]

We can see some difference: without standardization, the region between
the man's legs is not segmented well. So maybe we should keep the
standardization?

Review comment (johnnychen94, Member, Sep 25, 2021):
Maybe we can introduce a keyword normalize=false to control this behavior. And we can also refer to ImageContrastAdjustment in the docstring for more complicated normalization workflows.

By hardcoding the standardization in the implementation we lose some possibilities. In many similar cases, we prefer to let the caller rather than the callee do the work. See also https://docs.julialang.org/en/v1/manual/style-guide/#Handle-excess-argument-diversity-in-the-caller

Review comment (timholy, Member, Sep 25, 2021):

Presumably that is easily compensated by changing the values of λ. The only place in the mathematics where the image magnitude means anything is the λ*sum(abs2, img[componentmask] .- componentmean). So anything you can do by standardizing the image, you can do more quickly by adjusting λ. Alternatively, you can adjust μ since only the ratio of the terms really matters for the final result.

What does standardization even mean for an image with RGB pixels? Do you take the min & max across channels too? Why does the magnitude of the blue channel scale the magnitude of the red one? Aren't these independent pieces of information? Conversely, if my raw image has almost no diversity in the green channel, I'll be really surprised if standardization on the green channel converts something that is perceptually insignificant into something that drives the segmentation.

Basically, there isn't an answer to such questions. So it's better to let the user be in charge.
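Following that reasoning, the rescaling could be left to the caller; a minimal standalone sketch (the helper name `normalize01` is made up and not part of this PR):

```julia
# Caller-side min-max normalization to [0, 1]; do this (or adjust λ₁, λ₂, μ)
# before calling the segmentation routine, instead of hardcoding it inside.
normalize01(A) = (A .- minimum(A)) ./ max(maximum(A) - minimum(A), eps())

A = [0.2 0.5; 0.6 0.8]
B = normalize01(A)
```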


# Precalculation of some constants which helps simplify some integration
- # area = length(img) # area = ∫H𝚽 + ∫H𝚽ⁱ
+ area = length(img) # area = ∫H𝚽 + ∫H𝚽ⁱ
∫u₀ = sum(img) # ∫u₀ = ∫u₀H𝚽 + ∫u₀H𝚽ⁱ

# Initialize the level set
𝚽ⁿ = initial_level_set(size(img))

# Preallocation and initializtion
H𝚽 = trues(size(img)...)
𝚽ⁿ⁺¹ = similar(𝚽ⁿ)

Δ = ntuple(d -> CartesianIndex(ntuple(i -> i == d ? 1 : 0, N)), N)
Review comment (Member):
If you like, you can even make this

Suggested change:
- Δ = ntuple(d -> CartesianIndex(ntuple(i -> i == d ? 1 : 0, N)), N)
+ Δ = ntuple(d -> CartesianIndex(ntuple(i -> i == d, N)), N)

since the Bool will be promoted to Int. (Optional, of course, and you may not like it better). I've probably got places I could use that myself...
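Both forms do build identical unit offsets, since `CartesianIndex` converts the `Bool` entries to `Int`; a quick standalone check (with an arbitrary `N`):

```julia
# The ternary and the bare-Bool constructions yield the same tuple of
# unit Cartesian offsets. N = 3 is arbitrary for demonstration.
N = 3
Δ_int  = ntuple(d -> CartesianIndex(ntuple(i -> i == d ? 1 : 0, N)), N)
Δ_bool = ntuple(d -> CartesianIndex(ntuple(i -> i == d, N)), N)
```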

idx_first = first(CartesianIndices(𝚽ⁿ))
idx_last = last(CartesianIndices(𝚽ⁿ))

while (del > tol) & (iter < max_iter)
ϵ = 1e-8
diff = 0

# Calculate the average intensities
@. H𝚽 = 𝚽ⁿ > 0 # Heaviside function
c₁, c₂ = calculate_averages(img, H𝚽, area, ∫u₀) # Compute c₁(𝚽ⁿ), c₂(𝚽ⁿ)

# Calculate the variation of level set 𝚽ⁿ
@inbounds @simd for idx in CartesianIndices(𝚽ⁿ)
Review comment (Member):
@simd won't work if the function is complicated (e.g., has if-branch and uninlined functions)

Suggested change:
- @inbounds @simd for idx in CartesianIndices(𝚽ⁿ)
+ @inbounds for idx in CartesianIndices(𝚽ⁿ)

𝚽₀ = 𝚽ⁿ[idx] # 𝚽ⁿ(x, y)
u₀ = img[idx] # u₀(x, y)
𝚽₊ = broadcast(i->𝚽ⁿ[i], ntuple(d -> idx[d] != idx_last[d] ? idx + Δ[d] : idx, N))
𝚽₋ = broadcast(i->𝚽ⁿ[i], ntuple(d -> idx[d] != idx_first[d] ? idx - Δ[d] : idx, N))

# Solve the PDE of equation 9 in paper[3]
center_diff = ntuple(d -> (𝚽₊[d] - 𝚽₋[d])^2 / 4., N)
sum_center_diff = sum(center_diff)
C₊ = ntuple(d -> 1. / sqrt(ϵ + (𝚽₊[d] - 𝚽₀)^2 + sum_center_diff - center_diff[d]), N)
C₋ = ntuple(d -> 1. / sqrt(ϵ + (𝚽₋[d] - 𝚽₀)^2 + sum_center_diff - center_diff[d]), N)

K = sum(𝚽₊ .* C₊) + sum(𝚽₋ .* C₋)
δₕ = h / (h^2 + 𝚽₀^2) # Regularised Dirac function
difference_from_average = - λ₁ * (u₀ - c₁) ^ 2 + λ₂ * (u₀ - c₂) ^ 2

𝚽ⁿ⁺¹[idx] = 𝚽 = (𝚽₀ + Δt * δₕ * (μ * K + difference_from_average)) / (1. + μ * Δt * δₕ * (sum(C₊) + sum(C₋)))
diff += (𝚽 - 𝚽₀)^2
end

del = sqrt(diff / area)

if reinitial_flag
# Reinitialize 𝚽 to be the signed distance function to its zero level set
reinitialize!(𝚽ⁿ⁺¹, 𝚽ⁿ, Δt, h)
else
𝚽ⁿ .= 𝚽ⁿ⁺¹
end

iter += 1
end

return 𝚽ⁿ .> 0
end

function initial_level_set(shape::Tuple{Int64, Int64})
x₀ = reshape(collect(0:shape[begin]-1), shape[begin], 1)
y₀ = reshape(collect(0:shape[begin+1]-1), 1, shape[begin+1])
𝚽₀ = @. sin(pi / 5 * x₀) * sin(pi / 5 * y₀)
end

function initial_level_set(shape::Tuple{Int64, Int64, Int64})
x₀ = reshape(collect(0:shape[begin]-1), shape[begin], 1, 1)
y₀ = reshape(collect(0:shape[begin+1]-1), 1, shape[begin+1], 1)
z₀ = reshape(collect(0:shape[begin+2]-1), 1, 1, shape[begin+2])
𝚽₀ = @. sin(pi / 5 * x₀) * sin(pi / 5 * y₀) * sin(pi / 5 * z₀)
end
Review comment on lines +187 to +198 (Member):
Suggested change:
- function initial_level_set(shape::Tuple{Int64, Int64})
-     x₀ = reshape(collect(0:shape[begin]-1), shape[begin], 1)
-     y₀ = reshape(collect(0:shape[begin+1]-1), 1, shape[begin+1])
-     𝚽₀ = @. sin(pi / 5 * x₀) * sin(pi / 5 * y₀)
- end
- function initial_level_set(shape::Tuple{Int64, Int64, Int64})
-     x₀ = reshape(collect(0:shape[begin]-1), shape[begin], 1, 1)
-     y₀ = reshape(collect(0:shape[begin+1]-1), 1, shape[begin+1], 1)
-     z₀ = reshape(collect(0:shape[begin+2]-1), 1, 1, shape[begin+2])
-     𝚽₀ = @. sin(pi / 5 * x₀) * sin(pi / 5 * y₀) * sin(pi / 5 * z₀)
- end
+ initial_level_set(sz) = broadcast(*, kernelfactors(ntuple(i -> sin.((pi / 5) * (0:sz[i]-1)), length(sz)))

kernelfactors is from ImageFiltering and will make this truly arbitrary-dimensional.

Author reply:
I've tried making this change, but do I need to write it as follows to get a `Matrix`-typed output?

initial_level_set(sz) = broadcast(*, kernelfactors(ntuple(i -> sin.((pi / 5) * (0:sz[i]-1)), length(sz)))...)

Another question: this change seems to reduce performance:

# previous code's performance
function initial_level_set(shape::Tuple{Int64, Int64})
    x₀ = reshape(collect(0:shape[begin]-1), shape[begin], 1)
    y₀ = reshape(collect(0:shape[begin+1]-1), 1, shape[begin+1])
    𝚽₀ = @. sin(pi / 5 * x₀) * sin(pi / 5 * y₀)
end
...
julia> @btime chan_vese(img_gray, μ=0.25, λ₁=1.0, λ₂=1.0, tol=1e-3, max_iter=200, Δt=0.5, reinitial_flag=false);
  462.876 ms (119 allocations: 6.46 MiB)

# using kernelfactor
initial_level_set(sz) = broadcast(*, kernelfactors(ntuple(i -> sin.((pi / 5) * (0:sz[i]-1)), length(sz)))...)
...
julia> @btime chan_vese(img_gray, μ=0.25, λ₁=1.0, λ₂=1.0, tol=1e-3, max_iter=200, Δt=0.5, reinitial_flag=false);
  575.990 ms (115 allocations: 6.46 MiB)

It seems the change costs more than 100 ms of performance.

Review comment (Member):
Shoot, that should have been

initial_level_set(sz) = broadcast(*, kernelfactors(ntuple(i -> sin.((pi / 5) * (0:sz[i]-1)), length(sz)))...)

so that

julia> initial_level_set((3, 5))
3×5 OffsetArray(::Matrix{Float64}, 1:3, 1:5) with eltype Float64 with indices 1:3×1:5:
 0.0  0.0       0.0       0.0       0.0
 0.0  0.345492  0.559017  0.559017  0.345492
 0.0  0.559017  0.904508  0.904508  0.559017

julia> initial_level_set((3, 5, 2))
3×5×2 OffsetArray(::Array{Float64, 3}, 1:3, 1:5, 1:2) with eltype Float64 with indices 1:3×1:5×1:2:
[:, :, 1] =
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0

[:, :, 2] =
 0.0  0.0       0.0       0.0       0.0
 0.0  0.203075  0.328582  0.328582  0.203075
 0.0  0.328582  0.531657  0.531657  0.328582

Sorry about that.
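For comparison, the same arbitrary-dimensional initializer can be sketched without `kernelfactors` (the name `init_level_set_nd` is hypothetical, and it returns a plain `Array` rather than an `OffsetArray`):

```julia
# Dependency-free sketch of an arbitrary-dimensional level-set initializer,
# equivalent in spirit to the kernelfactors suggestion above: build one
# sin factor per dimension, each reshaped to broadcast along its own axis.
init_level_set_nd(sz) = broadcast(*,
    ntuple(d -> reshape(sin.((pi / 5) .* (0:sz[d]-1)),
                        ntuple(i -> i == d ? sz[d] : 1, length(sz))),
           length(sz))...)
```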


function calculate_averages(img::AbstractArray{T, N}, H𝚽::AbstractArray{S, N}, area::Int64, ∫u₀::Float64) where {T<:Real, S<:Bool, N}
Review comment (Member):
Suggested change:
- function calculate_averages(img::AbstractArray{T, N}, H𝚽::AbstractArray{S, N}, area::Int64, ∫u₀::Float64) where {T<:Real, S<:Bool, N}
+ function calculate_averages(img::AbstractArray{T, N}, H𝚽::AbstractArray{S, N}, area, ∫u₀) where {T<:Real, S<:Bool, N}

Putting these types in doesn't help you, and only prevents you from using, e.g., RGB colors for ∫u₀.

∫u₀H𝚽 = 0
Review comment (Member):
Suggested change:
- ∫u₀H𝚽 = 0
+ ∫u₀H𝚽 = zero(∫u₀)

This will make the code generic.

∫H𝚽 = 0
@inbounds for i in eachindex(img)
if H𝚽[i]
∫u₀H𝚽 += img[i]
∫H𝚽 += 1
end
end
∫H𝚽ⁱ = area - ∫H𝚽
∫u₀H𝚽ⁱ = ∫u₀ - ∫u₀H𝚽
c₁ = ∫u₀H𝚽 / max(1, ∫H𝚽)
c₂ = ∫u₀H𝚽ⁱ / max(1, ∫H𝚽ⁱ)

return c₁, c₂
end
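The identity noted in the code comments (`∫u₀ = ∫u₀H𝚽 + ∫u₀H𝚽ⁱ`, `area = ∫H𝚽 + ∫H𝚽ⁱ`) is what lets `calculate_averages` obtain both means in a single pass; a standalone sketch with made-up values:

```julia
# Single-pass averaging: sums over the outside region follow from the
# precomputed totals, so the image is traversed only once.
img = [0.1 0.9; 0.8 0.2]
H𝚽 = img .> 0.5                        # Heaviside of some level set
area, ∫u₀ = length(img), sum(img)       # totals over the whole image
∫u₀H𝚽, ∫H𝚽 = sum(img[H𝚽]), count(H𝚽)   # sums inside the contour
c₁ = ∫u₀H𝚽 / max(1, ∫H𝚽)                # mean inside the contour
c₂ = (∫u₀ - ∫u₀H𝚽) / max(1, area - ∫H𝚽) # mean outside, from the totals
```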

function calculate_reinitial(𝚽::AbstractArray{T, M}, 𝚿::AbstractArray{T, M}, Δt::Float64, h::Float64) where {T<:Real, M}
Review comment (Member):
Comments and test-coverage would be appreciated here. I'm not quite sure what this is about (I haven't yet seen this in the paper but I haven't read the whole thing carefully, just skimmed it).

ϵ = 1e-8
N = ndims(𝚽)

Δ = ntuple(d -> CartesianIndex(ntuple(i -> i == d ? 1 : 0, N)), N)
idx_first = first(CartesianIndices(𝚽))
idx_last = last(CartesianIndices(𝚽))

@inbounds @simd for idx in CartesianIndices(𝚽)
𝚽₀ = 𝚽[idx] # 𝚽ⁿ(x, y)
Δ₊ = ntuple(d -> idx[d] != idx_last[d] ? idx + Δ[d] : idx, N)
Δ₋ = ntuple(d -> idx[d] != idx_first[d] ? idx - Δ[d] : idx, N)
Δ𝚽₊ = broadcast(i -> (𝚽[i] - 𝚽₀) / h, Δ₊)
Δ𝚽₋ = broadcast(i -> (𝚽₀ - 𝚽[i]) / h, Δ₋)

maxΔ𝚽₊ = max.(Δ𝚽₊, 0)
minΔ𝚽₊ = min.(Δ𝚽₊, 0)
maxΔ𝚽₋ = max.(Δ𝚽₋, 0)
minΔ𝚽₋ = min.(Δ𝚽₋, 0)

G = 0
if 𝚽₀ > 0
G += sqrt(sum(max.(minΔ𝚽₊.^2, maxΔ𝚽₋.^2))) - 1
elseif 𝚽₀ < 0
G += sqrt(sum(max.(maxΔ𝚽₊.^2, minΔ𝚽₋.^2))) - 1
end
sign𝚽 = 𝚽₀ / sqrt(𝚽₀^2 + ϵ)
𝚿[idx] = 𝚽₀ - Δt * sign𝚽 * G
end

return 𝚿
end

function reinitialize!(𝚽::AbstractArray{T, M}, 𝚿::AbstractArray{T, M}, Δt::Float64, h::Float64, max_reiter::Int=5) where {T<:Real, M}
for i in 1 : max_reiter
𝚽 .= calculate_reinitial(𝚽, 𝚿, Δt, h)
end
end
test/chan_vese.jl (new file, 13 additions)
@@ -0,0 +1,13 @@
@testset "chan_vese" begin
@info "Test: Chan Vese Segmentation"

@testset "Gray Image Chan-Vese Segmentation Reference Test" begin
img_gray = imresize(testimage("cameraman"), (64, 64))
ref = load("references/Chan_Vese_Gray.png")
ref = ref .> 0
out = chan_vese(img_gray, μ=0.1, tol=1e-2, max_iter=200)
@test eltype(out) == Bool
@test sum(out) == sum(ref)
@test out == ref
end
end
Binary file added: test/references/Chan_Vese_Gray.png
test/runtests.jl (2 additions)
@@ -1,4 +1,5 @@
using ImageSegmentation, ImageCore, ImageFiltering, Test, SimpleWeightedGraphs, LightGraphs, StaticArrays, DataStructures
using FileIO, ImageTransformations, TestImages
using RegionTrees: isleaf, Cell, split!
using MetaGraphs: MetaGraph, clear_props!, get_prop, has_prop, set_prop!, props, vertices

@@ -15,3 +16,4 @@ include("watershed.jl")
include("region_merging.jl")
include("meanshift.jl")
include("merge_segments.jl")
include("chan_vese.jl")