Changes from all commits
142 commits
5976296
build + formatting
kordjamshidi Oct 27, 2015
71fda57
draft ER example using SL
kordjamshidi Oct 28, 2015
7c90188
added some queries
kordjamshidi Oct 29, 2015
8c1a1b1
modified the ILP version of ER
kordjamshidi Nov 17, 2015
76304eb
modified the ILP version of ER
kordjamshidi Nov 17, 2015
61aea92
modified the ILP version of ER
kordjamshidi Nov 17, 2015
066f87c
simple SL example running
kordjamshidi Nov 17, 2015
f824e37
simple SL example running
kordjamshidi Nov 17, 2015
c354a91
modified IO manager and feature generator
kordjamshidi Nov 19, 2015
bb91303
modified IO manager and feature generator
kordjamshidi Nov 19, 2015
18531fa
modified IO manager and feature generator
kordjamshidi Nov 19, 2015
5c2f39c
modified IO manager and feature generator
kordjamshidi Nov 19, 2015
dff68b5
modified IO manager and feature generator
kordjamshidi Nov 19, 2015
bf0a450
Removed Lbjava generated files
kordjamshidi Nov 19, 2015
9d3844e
Modified classifier imports
kordjamshidi Dec 1, 2015
0f0db25
working ER given the feature errors
kordjamshidi Dec 3, 2015
3add385
working ER given the feature errors
kordjamshidi Dec 3, 2015
9ba8584
some SL template
kordjamshidi Dec 4, 2015
f951370
SL_Instances
kordjamshidi Dec 7, 2015
4343790
SL-IOmanager for Saul
kordjamshidi Dec 8, 2015
146ed3e
a template for inference and loss to be debugged.
kordjamshidi Dec 8, 2015
b3fac2c
a template for inference and loss to be debugged.
kordjamshidi Dec 8, 2015
d7d015a
a template for inference and loss to be debugged.
kordjamshidi Dec 8, 2015
6eb8281
a template for feature generator to be debugged.
kordjamshidi Dec 8, 2015
292ffac
a template for feature generator to be debugged.
kordjamshidi Dec 8, 2015
1e3cbb6
a template for test_App
kordjamshidi Dec 8, 2015
6f67767
fixed typing and runtime errors
kordjamshidi Dec 8, 2015
c3b05bc
a template for test_App
kordjamshidi Dec 9, 2015
15551ab
Added scala versions of IStructure and IInstance
kordjamshidi Dec 11, 2015
ec8da51
Added scala versions of IStructure and IInstance
kordjamshidi Dec 11, 2015
94307bd
updated with upstream
kordjamshidi Mar 22, 2016
5446e05
-trying SL input/output
kordjamshidi Mar 23, 2016
1899e60
-trying SL input/output
kordjamshidi Mar 23, 2016
c95824f
Merge branch 'forSmallPRs' into SL-Integration
kordjamshidi Mar 23, 2016
e8d8a2d
-merged with ER
kordjamshidi Mar 23, 2016
8d44eb1
-SL I/O
kordjamshidi Mar 23, 2016
30327fa
Merge remote-tracking branch 'upstream/master' into smallFixes
kordjamshidi Mar 23, 2016
dfed404
-playing with types
kordjamshidi Mar 24, 2016
c436cb5
-initial version of SL integration
kordjamshidi Mar 25, 2016
2c9ce21
format
kordjamshidi Mar 25, 2016
2556b81
-removed java/lbjava files
kordjamshidi Mar 25, 2016
537ccfb
commented out the programs that use the lbjava generated classes as d…
kordjamshidi Mar 28, 2016
75ac53e
-added a comment
kordjamshidi Mar 30, 2016
71ca9bd
fixed joinTraining initialization
kordjamshidi Mar 30, 2016
3e4f0e5
Merge branch 'smallFixes' into SL-Integration
kordjamshidi Mar 30, 2016
2b07413
minor
kordjamshidi Mar 31, 2016
1a84d67
-initialization of join classifiers -added factors as a parameter to…
kordjamshidi Apr 1, 2016
9ab6560
added a trial loss-augmented classifier
kordjamshidi Apr 2, 2016
0eeee99
-Augmented the scores with loss.
kordjamshidi Apr 4, 2016
ab79d5e
-minor
kordjamshidi Apr 4, 2016
3288d91
-some fixes on feature generator
kordjamshidi Apr 5, 2016
724e59e
-finished feature generator
kordjamshidi Apr 10, 2016
f7945b0
Merge remote-tracking branch 'upstream/master' into SL-Integration
kordjamshidi Apr 11, 2016
900394d
-some minor trial and error
kordjamshidi Apr 15, 2016
a217d8e
added the test
kordjamshidi Apr 15, 2016
54aaac9
-update the scorer classifier inside ConstraintClassifier
kordjamshidi Apr 15, 2016
230a82c
-Some trial and error
kordjamshidi Apr 15, 2016
eb3acde
-LossAugmented Normalizer added
kordjamshidi Apr 19, 2016
b840e9a
-LossAugmented classifier via rewriting the scorer in lbjava
kordjamshidi Apr 20, 2016
618d89f
Merge branch 'forSmallPRs' into SL-Integration
kordjamshidi Apr 22, 2016
c01911f
Merge branch 'forSmallPRs' into SL-Integration
kordjamshidi Apr 22, 2016
37a5caa
- refined the input output structures
kordjamshidi Apr 25, 2016
e2a61e4
Merge remote-tracking branch 'upstream/master' into SL-Integration
kordjamshidi Apr 26, 2016
31d4e43
Merge remote-tracking branch 'upstream/master' into SL-Integration
kordjamshidi Apr 26, 2016
5516bbf
minor changes to test
kordjamshidi Apr 26, 2016
de9f2db
- some test to compare with independent models
kordjamshidi Apr 28, 2016
27d6209
-Fixed the conversion to float
kordjamshidi May 20, 2016
a40957f
-Fixed the weight update
kordjamshidi May 31, 2016
d20df41
-Format
kordjamshidi May 31, 2016
a1765eb
Merge remote-tracking branch 'upstream/master' into SL-Integration
kordjamshidi Jun 9, 2016
00594cd
-updated with upstream master
kordjamshidi Jun 9, 2016
f1eb1a0
-deleted all temporary test files
kordjamshidi Jun 9, 2016
0885b24
-updated to node-based SLModel rather than DM-based
kordjamshidi Jun 9, 2016
66f76e1
-minor edits
kordjamshidi Jun 9, 2016
621a8e9
-made SL inference more modular
kordjamshidi Jun 10, 2016
45a0fda
-minor edits
kordjamshidi Jun 10, 2016
e225901
-LBJava version update
kordjamshidi Jun 10, 2016
8bde26c
-adapted to the new version of LBJava
kordjamshidi Jun 11, 2016
0d2b099
-renamed main file
kordjamshidi Jun 11, 2016
b437610
-minor
kordjamshidi Jun 13, 2016
3eb6142
Merge remote-tracking branch 'upstream/master' into SL-Integration
kordjamshidi Jun 13, 2016
c856fde
-evaluation of a collection of constraint classifiers.
kordjamshidi Jun 19, 2016
27c22de
-format
kordjamshidi Jun 19, 2016
6265b89
Merge remote-tracking branch 'upstream/master' into SL-Integration
kordjamshidi Jun 19, 2016
d17ddd3
format
kordjamshidi Jun 19, 2016
a68dd4a
format
kordjamshidi Jun 19, 2016
ecad86a
-added the possibility of initializing with the trained classifiers
kordjamshidi Jun 21, 2016
1e9afb3
-minor
kordjamshidi Jun 22, 2016
f7f46c5
format
kordjamshidi Jun 26, 2016
552ea2a
-added the raw test
kordjamshidi Jun 30, 2016
bb8d6ad
-fixed the messages
kordjamshidi Jul 1, 2016
6a271c0
-weight vector and factor size test
kordjamshidi Jul 1, 2016
dd9847c
-edited the SL-test
kordjamshidi Jul 2, 2016
c7fa9d7
-edited the SL-test
kordjamshidi Jul 2, 2016
2b4b02a
-removed training
kordjamshidi Jul 2, 2016
a8d6972
-added a new test: combining CoNLL-SL old test with Christos String …
kordjamshidi Jul 2, 2016
3b02648
-added some prints and check values for tests
kordjamshidi Jul 7, 2016
5486800
-removed the intermediate variable for weight update
kordjamshidi Jul 8, 2016
90342ad
-updated LBJava version
kordjamshidi Jul 12, 2016
1e240c1
-temporary debugging changes
kordjamshidi Jul 29, 2016
8ca0779
Merge remote-tracking branch 'upstream/master' into SL-Integration
kordjamshidi Sep 12, 2016
5a2096d
Merge remote-tracking branch 'upstream/master' into sl-integration
Sep 12, 2016
32ef90d
Merge pull request #1 from bhargav/sl-integration
kordjamshidi Sep 12, 2016
459116d
Merge branch 'master' of https://github.com/Rahgooy/saul into Taher-m…
kordjamshidi Sep 13, 2016
d83e610
-change from SparseNetworkLBP to SparseNetworkLearner
kordjamshidi Sep 13, 2016
cba0ed2
-fixed path to config and add data
kordjamshidi Sep 13, 2016
a7640c7
Remove SparseNetworkLBP, JoinTrainSparseNetwork file usages.
Sep 14, 2016
9cb5b78
Merge pull request #2 from bhargav/sl-integration
kordjamshidi Sep 14, 2016
83c3fec
Merge remote-tracking branch 'upstream/master' into SL-Integration
kordjamshidi Sep 16, 2016
0e52509
-add SL documentation
kordjamshidi Sep 16, 2016
831f1d9
-improve ER documentation
kordjamshidi Sep 16, 2016
ffee0b1
Saul-SL documentation
kordjamshidi Sep 16, 2016
00841b0
Saul-SL documentation
kordjamshidi Sep 17, 2016
96fefc1
-SL-tests modified `subject to` to be true
kordjamshidi Sep 17, 2016
320dad6
-add SL config
kordjamshidi Sep 17, 2016
a8b96ff
add config
kordjamshidi Sep 17, 2016
6eed15e
-add some documentation to the code for initialization
kordjamshidi Sep 18, 2016
9a31b39
Merge remote-tracking branch 'upstream/master' into SL-Integration
kordjamshidi Sep 21, 2016
c19acd6
Merge remote-tracking branch 'upstream/master' into SL-Integration
kordjamshidi Sep 21, 2016
40dcfa7
-use the new SparseNetworkInit module
kordjamshidi Sep 22, 2016
d4fcdf4
-some documentation
kordjamshidi Sep 25, 2016
0ff34ef
-changed configuration
kordjamshidi Sep 28, 2016
1cdf1dc
-add a traceable test before a unit test
kordjamshidi Sep 28, 2016
ff5f788
-format
kordjamshidi Sep 28, 2016
09b5a34
Merge branch 'binarySVM' into SL-Integration
kordjamshidi Oct 16, 2016
0b97c47
-added a simple constraint to the binary classifier
kordjamshidi Oct 21, 2016
68a2773
-test sparse averaged perceptron
kordjamshidi Oct 22, 2016
55e5e75
-merged with master
kordjamshidi Oct 28, 2016
77bb959
-merged with master
kordjamshidi Oct 28, 2016
f072c8e
pulled upstream and added simple constraint classifier
kordjamshidi Oct 28, 2016
6eea007
- test jointTrainSparseNetwork using the randomApp
kordjamshidi Oct 28, 2016
733cb22
-change the name
kordjamshidi Oct 28, 2016
ac1ef28
-change the name
kordjamshidi Nov 2, 2016
59d7f55
-Added badge example with binary constraint
kordjamshidi Nov 2, 2016
616ce07
-made a simple SparseNetwork Learner without join
kordjamshidi Nov 2, 2016
d3466ce
-made a simple SparseNetwork Learner without join
kordjamshidi Nov 2, 2016
f95e74f
-fixed minor
kordjamshidi Nov 2, 2016
0ec7d3e
-fixed minor
kordjamshidi Nov 2, 2016
e8d07df
-fixed minor
kordjamshidi Nov 2, 2016
94c6bf2
-call the exampleArray with true
kordjamshidi Nov 2, 2016
44a9e33
-thanks bhargav, fixed the initialization cloning
kordjamshidi Nov 3, 2016
c14755b
-loss augmented inference with SparseNetworks added
kordjamshidi Nov 4, 2016
2 changes: 1 addition & 1 deletion .gitignore
@@ -1,3 +1,4 @@
/config
.DS_Store
data
*/data
@@ -13,7 +14,6 @@ target/
.projectFilesBackup
*.class
annotation-cache/
/config
*.lc
*.lex
*~
7 changes: 5 additions & 2 deletions build.sbt
@@ -57,7 +57,8 @@ lazy val commonSettings = Seq(
name := "saul-project",
resolvers ++= Seq(
Resolver.mavenLocal,
"CogcompSoftware" at "http://cogcomp.cs.illinois.edu/m2repo/"
"CogcompSoftware" at "http://cogcomp.cs.illinois.edu/m2repo/",
"Sonatype Releases" at "https://oss.sonatype.org/content/repositories/releases/"
),
javaOptions ++= List("-Xmx11g"),
libraryDependencies ++= Seq(
@@ -66,7 +67,9 @@ lazy val commonSettings = Seq(
"com.gurobi" % "gurobi" % "6.0",
"org.apache.commons" % "commons-math3" % "3.0",
"org.scalatest" % "scalatest_2.11" % "2.2.4",
"ch.qos.logback" % "logback-classic" % "1.1.7"
"ch.qos.logback" % "logback-classic" % "1.1.7",
"org.scalanlp" %% "breeze-viz" % "0.12",
"edu.illinois.cs.cogcomp" % "illinois-sl" % "1.3.6" withSources
),
fork := true,
connectInput in run := true,
26 changes: 26 additions & 0 deletions config/DCD.config
@@ -0,0 +1,26 @@
# {L2LossSSVM, StructuredPerceptron}
# LEARNING_MODEL = L2LossSSVM
LEARNING_MODEL = StructuredPerceptron
MAX_NUM_ITER = 20
# {DCDSolver, ParallelDCDSolver, DEMIParallelDCDSolver};
L2_LOSS_SSVM_SOLVER_TYPE = DCDSolver

NUMBER_OF_THREADS = 1
C_FOR_STRUCTURE = 1.0
TRAINMINI = false
TRAINMINI_SIZE = 1000
STOP_CONDITION = 0.0001
CHECK_INFERENCE_OPT = false
# MAX_NUM_ITER = 250
PROGRESS_REPORT_ITER = 10
INNER_STOP_CONDITION = 0.00001
MAX_ITER_INNER = 250
MAX_ITER_INNER_FINAL = 2500
TOTAL_NUMBER_FEATURE = -1
CLEAN_CACHE = true
CLEAN_CACHE_ITER = 5
DEMIDCD_NUMBER_OF_UPDATES_BEFORE_UPDATE_BUFFER = 100
DEMIDCD_NUMBER_OF_INF_PARSE_BEFORE_UPDATE_WV = 10
LEARNING_RATE = 0.01
DECAY_LEARNING_RATE = false
NUMBER_OF_FEATURE_BITS = 26
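For orientation, a minimal sketch of how a config like this is typically consumed on the illinois-sl side; the loader call below is an assumption based on illinois-sl's SLParameters API, not something shown in this diff:

import edu.illinois.cs.cogcomp.sl.core.SLParameters

// Sketch: read the solver/learning settings above into an illinois-sl parameter object.
val para = new SLParameters()
para.loadConfigFile("config/DCD.config") // picks up LEARNING_MODEL, MAX_NUM_ITER, C_FOR_STRUCTURE, ...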
1 change: 1 addition & 0 deletions saul-core/doc/INSTALLATION.md
@@ -81,3 +81,4 @@ We suggest using [IntelliJ IDEA](https://www.jetbrains.com/idea/download/).

If you are interested in contributing to the Saul project, either with your ideas or code, you are welcome
to create pull requests here.

@@ -137,8 +137,13 @@ object ClassifierUtils extends Logging {
logger.info(evalSeparator)
testResults
}
}

def apply1[T <: AnyRef, H <: AnyRef](insts: Iterable[T], cls: ConstrainedClassifier[T, H]): Results = {
logger.info(evalSeparator)
logger.info("Evaluating " + cls.getClassSimpleNameForClassifier)
cls.test(insts)
}
}
object ForgetAll {
def apply(c: Learnable[_]*): Unit = {
c.foreach((x: Learnable[_]) => x.forget())
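As a hedged usage sketch of the new apply1 overload above (testInstances and orgClassifier are hypothetical stand-ins for a real instance collection and a trained ConstrainedClassifier):

// Hypothetical names: testInstances is an Iterable[T], orgClassifier a ConstrainedClassifier[T, H].
val results: Results = ClassifierUtils.apply1(testInstances, orgClassifier)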
@@ -10,13 +10,16 @@ import edu.illinois.cs.cogcomp.lbjava.classify.{ Classifier, FeatureVector, Test
import edu.illinois.cs.cogcomp.infer.ilp.{ GurobiHook, ILPSolver, OJalgoHook }
import edu.illinois.cs.cogcomp.lbjava.infer.{ BalasHook, FirstOrderConstraint, InferenceManager }
import edu.illinois.cs.cogcomp.lbjava.learn.Learner
import edu.illinois.cs.cogcomp.saul.classifier.SL_model.LossAugmentedNormalizer
import edu.illinois.cs.cogcomp.saul.classifier.infer.InferenceCondition
import edu.illinois.cs.cogcomp.saul.constraint.LfsConstraint
import edu.illinois.cs.cogcomp.saul.datamodel.edge.Edge
import edu.illinois.cs.cogcomp.saul.lbjrelated.{ LBJClassifierEquivalent, LBJLearnerEquivalent }
import edu.illinois.cs.cogcomp.saul.parser.IterableToLBJavaParser
import edu.illinois.cs.cogcomp.saul.test.TestWithStorage
import edu.illinois.cs.cogcomp.saul.util.Logging

import scala.collection.mutable
import scala.reflect.ClassTag

/** The input to a ConstrainedClassifier is of type `T`. However given an input, the inference is based upon the
@@ -31,7 +34,7 @@ abstract class ConstrainedClassifier[T <: AnyRef, HEAD <: AnyRef](val onClassifi
implicit val headType: ClassTag[HEAD]
) extends LBJClassifierEquivalent with Logging {

type LEFT = T
final type LEFT = T
type RIGHT = HEAD

def className: String = this.getClass.getName
@@ -69,26 +72,30 @@ abstract class ConstrainedClassifier[T <: AnyRef, HEAD <: AnyRef](val onClassifi
val l = pathToHead.get.forward.neighborsOf(x).toSet.toList

if (l.isEmpty) {
logger.error("Warning: Failed to find head")
// logger.error("Warning: Failed to find head")
None
} else if (l.size != 1) {
logger.warn("Find too many heads")
// logger.warn("Find too many heads")
Some(l.head)
} else {
logger.info(s"Found head ${l.head} for child $x")
/// logger.info(s"Found head ${l.head} for child $x")
Some(l.head)
}
}
}

def getCandidates(head: HEAD): Seq[T] = {
// def getMultiCandidates(head: Seq[HEAD]): Seq[LEFT] = {
// head.flatMap(h => getCandidates(h)).distinct
// }

def getCandidates(head: HEAD): Seq[LEFT] = {
if (tType.equals(headType) || pathToHead.isEmpty) {
head.asInstanceOf[T] :: Nil
} else {
val l = pathToHead.get.backward.neighborsOf(head)

if (l.isEmpty) {
logger.error("Failed to find part")
// logger.error("Failed to find part")
Seq.empty[T]
} else {
l.filter(filter(_, head)).toSeq
@@ -103,7 +110,7 @@ abstract class ConstrainedClassifier[T <: AnyRef, HEAD <: AnyRef](val onClassifi
var inference = InferenceManager.get(name, head)
if (inference == null) {
inference = infer(head)
logger.warn(s"Inference ${name} has not been cached; running inference . . . ")
// logger.warn(s"Inference ${name} has not been cached; running inference . . . ")
InferenceManager.put(name, inference)
}
inference.valueOf(cls, t)
@@ -113,6 +120,16 @@ abstract class ConstrainedClassifier[T <: AnyRef, HEAD <: AnyRef](val onClassifi
}
}

def lossAugmentedInfer(h: HEAD, offset: Int): mutable.ListBuffer[String] = {
val v = mutable.ListBuffer[String]()
getCandidates(h).foreach { example =>
v += buildWithConstraint(subjectTo.createInferenceCondition[T](getSolverInstance(), new LossAugmentedNormalizer(offset, onClassifier.classifier, example)).convertToType[T], onClassifier.classifier)(example)
}
v
}

def buildWithConstraint(inferenceCondition: InferenceCondition[T, HEAD])(t: T): String = {
buildWithConstraint(inferenceCondition, onClassifier.classifier)(t)
}
@@ -141,7 +158,8 @@ abstract class ConstrainedClassifier[T <: AnyRef, HEAD <: AnyRef](val onClassifi
.orElse({
onClassifier match {
case clf: Learnable[T] => Some(clf.node)
case _ => logger.error("pathToHead is not provided and the onClassifier is not a Learnable!"); None
case _ => None
// logger.error("pathToHead is not provided and the onClassifier is not a Learnable!"); None
}
})
.map(node => node.getTestingInstances)
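A minimal sketch of how the new lossAugmentedInfer hook might be called; orgClassifier and sentence are hypothetical, and the second argument scales the loss by the number of candidates, mirroring setCandidates in JointTrainSparseNetwork below:

// Hypothetical: one loss-augmented prediction per candidate under the head instance.
val predictions = orgClassifier.lossAugmentedInfer(sentence, orgClassifier.getCandidates(sentence).size)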
@@ -18,16 +18,16 @@ object JointTrainSparseNetwork {

val logger: Logger = LoggerFactory.getLogger(this.getClass)
var difference = 0
def apply[HEAD <: AnyRef](node: Node[HEAD], cls: List[ConstrainedClassifier[_, HEAD]], init: Boolean)(implicit headTag: ClassTag[HEAD]) = {
train[HEAD](node, cls, 1, init)
def apply[HEAD <: AnyRef](node: Node[HEAD], cls: List[ConstrainedClassifier[_, HEAD]], init: Boolean, lossAugmented: Boolean)(implicit headTag: ClassTag[HEAD]) = {
train[HEAD](node, cls, 1, init, lossAugmented)
}

def apply[HEAD <: AnyRef](node: Node[HEAD], cls: List[ConstrainedClassifier[_, HEAD]], it: Int, init: Boolean)(implicit headTag: ClassTag[HEAD]) = {
train[HEAD](node, cls, it, init)
def apply[HEAD <: AnyRef](node: Node[HEAD], cls: List[ConstrainedClassifier[_, HEAD]], it: Int, init: Boolean, lossAugmented: Boolean = false)(implicit headTag: ClassTag[HEAD]) = {
train[HEAD](node, cls, it, init, lossAugmented)
}

@scala.annotation.tailrec
def train[HEAD <: AnyRef](node: Node[HEAD], cls: List[ConstrainedClassifier[_, HEAD]], it: Int, init: Boolean)(implicit headTag: ClassTag[HEAD]): Unit = {
def train[HEAD <: AnyRef](node: Node[HEAD], cls: List[ConstrainedClassifier[_, HEAD]], it: Int, init: Boolean, lossAugmented: Boolean = false)(implicit headTag: ClassTag[HEAD]): Unit = {
// forall members in collection of the head (dm.t) do
logger.info("Training iteration: " + it)
if (init) ClassifierUtils.InitializeClassifiers(node, cls: _*)
@@ -43,19 +43,24 @@ object JointTrainSparseNetwork {
if (idx % 5000 == 0)
logger.info(s"Training: $idx examples inferred.")

cls.foreach {
case classifier: ConstrainedClassifier[_, HEAD] =>
val typedClassifier = classifier.asInstanceOf[ConstrainedClassifier[_, HEAD]]
val oracle = typedClassifier.onClassifier.getLabeler
if (lossAugmented)
cls.foreach { cls_i =>
cls_i.onClassifier.classifier.setLossFlag()
cls_i.onClassifier.classifier.setCandidates(cls_i.getCandidates(h).size * cls.size)
}

typedClassifier.getCandidates(h) foreach {
cls.foreach {
currentClassifier: ConstrainedClassifier[_, HEAD] =>
val oracle = currentClassifier.onClassifier.getLabeler
val baseClassifier = currentClassifier.onClassifier.classifier.asInstanceOf[SparseNetworkLearner]
currentClassifier.getCandidates(h) foreach {
candidate =>
{
def trainOnce() = {
val result = typedClassifier.classifier.discreteValue(candidate)

val result = currentClassifier.classifier.discreteValue(candidate)
val trueLabel = oracle.discreteValue(candidate)
val ilearner = typedClassifier.onClassifier.classifier.asInstanceOf[SparseNetworkLearner]
val lLexicon = typedClassifier.onClassifier.getLabelLexicon
val lLexicon = currentClassifier.onClassifier.getLabelLexicon
var LTU_actual: Int = 0
var LTU_predicted: Int = 0
for (i <- 0 until lLexicon.size()) {
@@ -69,26 +74,26 @@
// and the LTU of the predicted class should be demoted.
if (!result.equals(trueLabel))
{
val a = typedClassifier.onClassifier.getExampleArray(candidate)
val a = currentClassifier.onClassifier.getExampleArray(candidate)
val a0 = a(0).asInstanceOf[Array[Int]] //exampleFeatures
val a1 = a(1).asInstanceOf[Array[Double]] // exampleValues
val exampleLabels = a(2).asInstanceOf[Array[Int]]
val label = exampleLabels(0)
var N = ilearner.getNetwork.size
var N = baseClassifier.getNetwork.size

if (label >= N || ilearner.getNetwork.get(label) == null) {
val conjugateLabels = ilearner.isUsingConjunctiveLabels | ilearner.getLabelLexicon.lookupKey(label).isConjunctive
ilearner.setConjunctiveLabels(conjugateLabels)
if (label >= N || baseClassifier.getNetwork.get(label) == null) {
val conjugateLabels = baseClassifier.isUsingConjunctiveLabels | baseClassifier.getLabelLexicon.lookupKey(label).isConjunctive
baseClassifier.setConjunctiveLabels(conjugateLabels)

val ltu: LinearThresholdUnit = ilearner.getBaseLTU
ltu.initialize(ilearner.getNumExamples, ilearner.getNumFeatures)
ilearner.getNetwork.set(label, ltu)
val ltu: LinearThresholdUnit = baseClassifier.getBaseLTU.clone().asInstanceOf[LinearThresholdUnit]
ltu.initialize(baseClassifier.getNumExamples, baseClassifier.getNumFeatures)
baseClassifier.getNetwork.set(label, ltu)
N = label + 1
}

val ltu_actual = ilearner.getLTU(LTU_actual).asInstanceOf[LinearThresholdUnit]
val ltu_predicted = ilearner.getLTU(LTU_predicted).asInstanceOf[LinearThresholdUnit]
val ltu_actual = baseClassifier.getLTU(LTU_actual).asInstanceOf[LinearThresholdUnit]
val ltu_predicted = baseClassifier.getLTU(LTU_predicted).asInstanceOf[LinearThresholdUnit]

if (ltu_actual != null)
ltu_actual.promote(a0, a1, 0.1)
@@ -100,8 +105,13 @@
trainOnce()
}
}

}
}
if (lossAugmented)
cls.foreach { cls_i =>
cls_i.onClassifier.classifier.unsetLossFlag()
}
}
train(node, cls, it - 1, false, lossAugmented) // propagate the loss-augmented flag to the remaining iterations
}
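Putting the extended signature together, a hedged usage sketch (the node and classifier names are hypothetical):

// Hypothetical: sentences is a Node[Sentence]; perClassifier and orgClassifier are
// ConstrainedClassifier[_, Sentence] instances sharing that head node.
JointTrainSparseNetwork(sentences, perClassifier :: orgClassifier :: Nil,
  it = 10, // number of joint training iterations
  init = true, // initialize the sparse networks before training
  lossAugmented = true) // enable the new loss-augmented inference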
@@ -0,0 +1,72 @@
/** This software is released under the University of Illinois/Research and Academic Use License. See
* the LICENSE file in the root folder for details. Copyright (c) 2016
*
* Developed by: The Cognitive Computations Group, University of Illinois at Urbana-Champaign
* http://cogcomp.cs.illinois.edu/
*/
package edu.illinois.cs.cogcomp.saul.classifier.SL_model

import edu.illinois.cs.cogcomp.lbjava.learn.{ LinearThresholdUnit, SparseNetworkLearner }
import edu.illinois.cs.cogcomp.saul.classifier.infer.InitSparseNetwork
import edu.illinois.cs.cogcomp.saul.datamodel.node.Node
import edu.illinois.cs.cogcomp.sl.util.WeightVector

import scala.collection.mutable.ListBuffer

/** Created by Parisa on 4/1/16.
* Here we only make the lbjava lexicons for each onClassifier
* (i.e. the base classifier of each constraint classifier) based on the features of IInstances
*/
object Initialize {

def apply[HEAD <: AnyRef](node: Node[HEAD], model: SaulSLModel[HEAD], usePreTrained: Boolean = false): SaulSLModel[HEAD] = {

var wvLength = 0
var fullWeightList: ListBuffer[Array[Float]] = ListBuffer()

/* We are not reading any pre-trained model into the SparseNetworks here: we forget
all the models and go over the data to build the right lexicon size and the right
number of LTUs. */

if (!usePreTrained)
model.Factors.foreach {
cf => InitSparseNetwork(node, cf)
}
/* At this point we have either built the lexicon by going over the data in the block
above, or we use the loaded lexicons when usePreTrained == true. The goal is to build
a global weight vector over all classifiers and initialize it to a fixed size. */

model.Factors.foreach(
x => {
val sparseNet = x.onClassifier.classifier.asInstanceOf[SparseNetworkLearner]
val lexiconSize = sparseNet.getLexicon.size()

for (i <- 0 until sparseNet.getNetwork.size()) {

val trainedWeights = sparseNet.getNetwork.get(i).asInstanceOf[LinearThresholdUnit].getWeightVector
val fullWeights = Array.fill[Float](lexiconSize)(0)

if (usePreTrained) { // copy the loaded weights; otherwise the weights stay zero
for (j <- 0 until lexiconSize)
fullWeights(j) = trainedWeights.getWeight(j).toFloat
}
fullWeightList = fullWeightList :+ fullWeights

wvLength = wvLength + lexiconSize
}

println("lexicon size: " + sparseNet.getLexicon.size(), "* label lexicon size:", sparseNet.getLabelLexicon.size())
}
)
// wv is the concatenation, over factors and then over LTUs, of the per-LTU weight blocks:
// size(wv) = sum over factors of sum over LTUs of size(ltu_i)

val myWeight = Array(fullWeightList.flatten: _*)
val wv = new WeightVector(myWeight) // one unified weight vector over all initialized LTUs
val m = new SaulSLModel[HEAD](model.Factors.toList, fullWeightList) // fullWeightList is the list of individual weight vectors
m.wv = wv
m
/* This weight vector is a flat vector containing one block of weights per LTU. */
} // end of apply
} // end of object
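In equation form (a reading of the loop above, not notation used in the PR): writing $w^{(f)}_k$ for the weight block of LTU $k$ of factor $f$ and $d_f$ for factor $f$'s lexicon size, the unified vector is

$$w = \big[\, w^{(1)}_1; \dots; w^{(1)}_{K_1}; \; \dots; \; w^{(F)}_1; \dots; w^{(F)}_{K_F} \,\big], \qquad |w| = \sum_{f=1}^{F} K_f \, d_f,$$

where $K_f$ is the number of LTUs (labels) of factor $f$; with usePreTrained each block is copied from the trained LTU, otherwise it stays all zeros.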
@@ -0,0 +1,36 @@
/** This software is released under the University of Illinois/Research and Academic Use License. See
* the LICENSE file in the root folder for details. Copyright (c) 2016
*
* Developed by: The Cognitive Computations Group, University of Illinois at Urbana-Champaign
* http://cogcomp.cs.illinois.edu/
*/
package edu.illinois.cs.cogcomp.saul.classifier.SL_model

import edu.illinois.cs.cogcomp.lbjava.classify.ScoreSet
import edu.illinois.cs.cogcomp.lbjava.learn.{ Learner, Normalizer, SparseNetworkLearner }

/** Created by Parisa on 4/18/16.
*/
class LossAugmentedNormalizer(cand_num: Int, c: Learner, example: AnyRef) extends Normalizer {
/** Augments the scores of the stored example with a loss term scaled by the number of candidates.
*
* @param scores The set of scores to normalize (ignored; the scores are recomputed from the stored example).
* @return The loss-augmented set of scores.
*/
def normalize(scores: ScoreSet): ScoreSet = {
if (cand_num == 0)
println("There is no relevant component of this type in the head to be classified.")
val cf = c.asInstanceOf[SparseNetworkLearner]
val gold = cf.getLabeler.discreteValue(example)
val lLexicon = cf.getLabelLexicon

val resultS: ScoreSet = cf.scores(example)
for (i <- 0 until lLexicon.size()) {
val label = lLexicon.lookupKey(i).getStringValue
// Use floating-point division: 1 / cand_num would be integer division and always 0 for cand_num > 1.
if (lLexicon.lookupKey(i).valueEquals(gold))
resultS.put(label, resultS.getScore(label).score - 1.0 / cand_num)
else
resultS.put(label, resultS.getScore(label).score + 1.0 / cand_num)
}
resultS
}
}
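In effect, for gold label $y^{*}$ and $n$ candidates, the normalizer turns the base scores $s(y)$ into loss-augmented scores (a reading of the code above):

$$s'(y) = \begin{cases} s(y) - \tfrac{1}{n} & \text{if } y = y^{*} \\ s(y) + \tfrac{1}{n} & \text{otherwise,} \end{cases}$$

biasing the argmax toward labels that disagree with the gold label, as loss-augmented inference requires.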