train(dataMaj, dataMin)
    initialize
    store dataMin
    for epoch ∈ {0 .. epochs}
        for x ∈ dataMin
            minBatch ← NMB(x)
            majBatch ← BMB(x)
            convSamples ← predict(minBatch)
            concatSample ← (convSamples | majBatch)
            trainFit(discriminator, concatSample)
            trainFit(generator, concatSample)
        end loop for x
    end loop for epoch

generateSynthData(N)
    synthSet ← ∅
    M ← 1 + ⌊N / |dataMin|⌋
    for each x ∈ dataMin:
        B ← ∅
        for k ∈ {0 .. ⌊M / GEN⌋}:
            B ← B ⋃ predict(NMB(x))
        end loop for k
        synthSet ← synthSet ⋃ B[1..M]
    end loop for x
    ← synthSet[1..N]

NMB_old(x)
    Fit dataMin for neighbourhood search
    Search neighbourhood indices NI(x) for x
    shuffle NI(x)
    ← { x_i ∈ dataMin : i ∈ NI(x) }

NMB_new(x)
    Take indices NI(x) for x from stored list
    shuffle NI(x)
    ← { x_i ∈ dataMin : i ∈ NI(x) }

NMB_new precomputation (done once, at initialization):
    Fit dataMin for neighbourhood search
    for all x ∈ dataMin
        Search and store neighbourhood indices NI(x) for x
    end loop for x

BMB_old
    Fit dataMaj for neighbourhood search
    bmbi ← Search neighbourhood indices NI(x) for all x ∈ dataMin
    ← GEN random points from dataMaj (ignoring the value of bmbi)

BMB_new
    ← GEN random points from dataMaj
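The core of the NMB change is precomputing the neighbour index lists NI(x) once, instead of refitting the neighbourhood search on every NMB(x) call. A minimal sketch in plain NumPy; the class shape, parameter names, and the brute-force distance search are my own assumptions, not the project's actual API:

```python
import numpy as np

class NMB:
    """Minority-neighbourhood batch sampler, sketching the 'new' variant:
    all neighbour index lists NI(x) are searched once, up front, and stored.
    The 'old' variant re-fitted the neighbourhood search inside every call."""

    def __init__(self, data_min, n_neighbors=5, seed=0):
        self.data_min = np.asarray(data_min, dtype=float)
        self.rng = np.random.default_rng(seed)
        # "Fit dataMin for neighbourhood search": pairwise distances, once.
        d = np.linalg.norm(
            self.data_min[:, None, :] - self.data_min[None, :, :], axis=-1
        )
        np.fill_diagonal(d, np.inf)  # a point is not its own neighbour
        # "Search and store neighbourhood indices NI(x) for all x ∈ dataMin".
        self.ni = np.argsort(d, axis=1)[:, :n_neighbors]

    def __call__(self, i):
        """NMB(x_i): the shuffled neighbourhood points of minority sample i."""
        idx = self.ni[i].copy()
        self.rng.shuffle(idx)        # shuffle NI(x)
        return self.data_min[idx]    # { x_j ∈ dataMin : j ∈ NI(x) }
```

With this layout, each NMB(x) call is only an index shuffle plus a fancy-indexing lookup, which matches the intent of the "take indices from stored list" step.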
Legend:
    removed operation
    unchanged operation
    new operations
    call to changed functions
    affected loop
    affected function
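Read as a whole, generateSynthData pads out each minority point's neighbourhood through the generator and trims the result to N samples. A self-contained sketch under stated assumptions: predict() is stubbed as a hypothetical generator call, and nmb() is simplified to a random minority subsample rather than a true neighbourhood:

```python
import numpy as np

GEN = 8  # generator batch size; name taken from the pseudocode

def predict(batch):
    """Stub for the trained generator: here it merely jitters its input."""
    return batch + np.random.default_rng(0).normal(scale=0.05, size=batch.shape)

def nmb(x, data_min, rng):
    """Simplified stand-in for NMB(x): GEN random minority points."""
    return data_min[rng.choice(len(data_min), size=GEN)]

def generate_synth_data(n, data_min, rng=None):
    rng = rng or np.random.default_rng(0)
    synth = []                                        # synthSet ← ∅
    m = 1 + n // len(data_min)                        # M ← 1 + ⌊N / |dataMin|⌋
    for x in data_min:                                # for each x ∈ dataMin
        b = []                                        # B ← ∅
        for _ in range(m // GEN + 1):                 # for k ∈ {0 .. ⌊M / GEN⌋}
            b.append(predict(nmb(x, data_min, rng)))  # B ← B ⋃ predict(NMB(x))
        synth.append(np.concatenate(b)[:m])           # synthSet ← synthSet ⋃ B[1..M]
    return np.concatenate(synth)[:n]                  # ← synthSet[1..N]
```

The inner loop runs ⌊M/GEN⌋ + 1 times, so each minority point contributes at least M generated samples before trimming; the final slice enforces exactly N.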
Speed-test (#N: number of calls; time in parentheses):
    without changes:
        total: #9 (453.6 s)
    with changes:
        total: #9 (434.6 s)
        train: #9 (338.0 s)
        generateSynthData: #348 (88.2 s)
        NMB: #6501 (1.7 s)
        BMB: #3480 (0.3 s)
        predict@generateSynthData: #3012 (87.4 s)
The changes give a speedup of ~20 s.
Most of the time is spent in training and in the predict calls.
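Per-function call counts and cumulative times of this shape can be collected with Python's built-in cProfile/pstats; a minimal sketch, where the three functions are placeholders standing in for the real train/generateSynthData/predict:

```python
import cProfile
import io
import pstats

def predict():                 # placeholder for the generator's predict call
    sum(range(1000))

def generate_synth_data():     # placeholder; calls predict several times
    for _ in range(5):
        predict()

def train():                   # placeholder top-level entry point
    for _ in range(3):
        generate_synth_data()

profiler = cProfile.Profile()
profiler.enable()
train()
profiler.disable()

# Report call counts (ncalls) and cumulative seconds (cumtime) per function,
# restricted by regex to the functions of interest.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream) \
    .sort_stats("cumulative") \
    .print_stats("predict|train|synth")
print(stream.getvalue())
```

Sorting by "cumulative" makes the caller/callee time split visible, which is how one sees that predict@generateSynthData accounts for nearly all of generateSynthData's time.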