---
layout: posts
classes: wide
title: "Scenes-with-text Detection (v8.4)"
date: 2025-12-14T01:08:09+00:00
---
## About this version

- Submitter: [keighrim](https://github.com/keighrim)
- Submission Time: 2025-12-14T01:08:09+00:00
- Prebuilt Container Image: [ghcr.io/clamsproject/app-swt-detection:v8.4](https://github.com/clamsproject/app-swt-detection/pkgs/container/app-swt-detection/v8.4)
- Release Notes

  (no notes provided by the developer)

## About this app (See raw [metadata.json](metadata.json))

**Detects scenes with text, such as slates, chyrons, and credits. The app can run in three modes, depending on the `useClassifier` and `useStitcher` parameters. When only `useClassifier=true`, it runs in "TimePoint mode" and generates TimePoint annotations. When only `useStitcher=true`, it runs in "TimeFrame mode" and generates TimeFrame annotations based on existing TimePoint annotations; if no TimePoints are found, it produces an error. By default, it runs in "both" mode, first generating TimePoint annotations and then TimeFrame annotations on top of them.**
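
Since mode selection is a matter of query parameters, here is a minimal sketch of driving the three modes over HTTP with Python. It assumes the prebuilt container is running and mapped to `localhost:5000` (a typical port mapping for CLAMS app containers) and that `input.mmif` is a placeholder MMIF file referencing a VideoDocument; neither is prescribed by this page.

```python
import requests

# Placeholder input: a MMIF file whose documents include a VideoDocument.
with open("input.mmif") as f:
    mmif_in = f.read()

base = "http://localhost:5000/"  # assumed container port mapping

# Default "both" mode: classify TimePoints, then stitch TimeFrames from them.
both = requests.post(base, data=mmif_in)

# "TimePoint mode": classification only, no stitching.
tp_only = requests.post(
    base, params={"useClassifier": "true", "useStitcher": "false"}, data=mmif_in
)

# "TimeFrame mode": stitch from TimePoints already present in the input MMIF;
# the app errors out if the input contains no TimePoint annotations.
tf_only = requests.post(
    base, params={"useClassifier": "false", "useStitcher": "true"}, data=mmif_in
)

print(both.text)  # output MMIF with the new view(s) appended
```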

- App ID: [http://apps.clams.ai/swt-detection/v8.4](http://apps.clams.ai/swt-detection/v8.4)
- App License: Apache 2.0
- Source Repository: [https://github.com/clamsproject/app-swt-detection](https://github.com/clamsproject/app-swt-detection) ([source tree of the submitted version](https://github.com/clamsproject/app-swt-detection/tree/v8.4))


#### Inputs
(**Note**: "*" as a property value means that the property is required but can be any value.)

- [http://mmif.clams.ai/vocabulary/VideoDocument/v1](http://mmif.clams.ai/vocabulary/VideoDocument/v1) (required)
(of any properties)


#### Configurable Parameters
(**Note**: _Multivalued_ means the parameter can have one or more values.)

- `useClassifier`: optional, defaults to `true`

    - Type: boolean
    - Multivalued: False
    - Choices: `false`, **_`true`_**


    > Use the image classifier model to generate TimePoint annotations.
- `tpModelName`: optional, defaults to `convnextv2_tiny`

    - Type: string
    - Multivalued: False
    - Choices: `convnextv2_large`, **_`convnextv2_tiny`_**


    > Model name to use for classification, only applies when `useClassifier=true`.
- `tpModelBatchSize`: optional, defaults to `200`

    - Type: integer
    - Multivalued: False

    > Number of images to process in a batch for classification. Smaller batch sizes will use less memory but may be slower. The default value of 200 is chosen as a safe maximum for the "large" model running on a desktop-grade GPU (12GB VRAM). Only applies when `useClassifier=true`.
- `tpUsePosModel`: optional, defaults to `true`

    - Type: boolean
    - Multivalued: False
    - Choices: `false`, **_`true`_**


    > Use the model trained with positional features, only applies when `useClassifier=true`.
- `tpStartAt`: optional, defaults to `0`

    - Type: integer
    - Multivalued: False


    > Number of milliseconds into the video to start processing, only applies when `useClassifier=true`.
- `tpStopAt`: optional, defaults to `9223372036854775807`

    - Type: integer
    - Multivalued: False


    > Number of milliseconds into the video to stop processing, only applies when `useClassifier=true`.
- `tpSampleRate`: optional, defaults to `1000`

    - Type: integer
    - Multivalued: False


    > Milliseconds between sampled frames, only applies when `useClassifier=true`.
- `useStitcher`: optional, defaults to `true`

    - Type: boolean
    - Multivalued: False
    - Choices: `false`, **_`true`_**


    > Use the stitcher after classifying the TimePoints.
- `tfMinTPScore`: optional, defaults to `0.5`

    - Type: number
    - Multivalued: False

    > Minimum score for a TimePoint to be included in a TimeFrame. A lower value will include more TimePoints in the TimeFrame (increasing recall at the expense of precision). Only applies when `useStitcher=true`.
- `tfMinTFScore`: optional, defaults to `0.9`

    - Type: number
    - Multivalued: False

    > Minimum score for a TimeFrame. A lower value will include more TimeFrames in the output (increasing recall at the expense of precision). Only applies when `useStitcher=true`.
- `tfMinTFDuration`: optional, defaults to `5000`

    - Type: integer
    - Multivalued: False


    > Minimum duration of a TimeFrame in milliseconds, only applies when `useStitcher=true`.
- `tfAllowOverlap`: optional, defaults to `false`

    - Type: boolean
    - Multivalued: False
    - Choices: **_`false`_**, `true`

    > Allow overlapping TimeFrames; only applies when `useStitcher=true`.
- `tfDynamicSceneLabels`: optional, defaults to `['credit', 'credits']`

    - Type: string
    - Multivalued: True

    > Labels that are considered dynamic scenes. For dynamic scenes, TimeFrame annotations contain multiple representative points to follow any changes in the scene. Only applies when `useStitcher=true`.
- `tfLabelMap`: optional, defaults to `[]`

    - Type: map
    - Multivalued: True

    > (See also `tfLabelMapPreset`; set `tfLabelMapPreset=nopreset` to ensure a preset does not override `tfLabelMap` when using this.) Maps a label in the input TimePoint annotations to a new label in the stitched TimeFrame annotations. Must be formatted as IN_LABEL:OUT_LABEL (with a colon). To pass multiple mappings, use this parameter multiple times. When two or more TP labels are mapped to one TF label, this effectively works as a "binning" operation. If no mapping is given, all input labels are passed through, leaving both the TP and TF labelsets unchanged. However, when at least one label is mapped, all remaining "unset" labels are mapped to the negative label (`-`), which is added to the TF labelset automatically if not already present. Only applies when `useStitcher=true`. (A usage sketch follows this parameter list.)
- `tfLabelMapPreset`: optional, defaults to `relaxed`

    - Type: string
    - Multivalued: False
    - Choices: `noprebin`, `nomap`, `strict`, `simpler`, `simple`, **_`relaxed`_**, `binary-bars`, `binary-slate`, `binary-chyron-strict`, `binary-chyron-relaxed`, `binary-credits`, `collapse-close`, `collapse-close-reduce-difficulty`, `collapse-close-bin-lower-thirds`, `ignore-difficulties`, `nopreset`


    > (See also `tfLabelMap`) Preset alias of a label mapping. If not `nopreset`, this parameter will override the `tfLabelMap` parameter. Available presets are:<br/>- `noprebin`: []<br/>- `nomap`: []<br/>- `strict`: ['`B`:`Bars`', '`S`:`Slate`', '`IN`:`Chyron-person`', '`CR`:`Credits`', '`M`:`Main`', '`O`:`Opening`', '`W`:`Opening`', '`Y`:`Chyron-other`', '`KU`:`Chyron-other`', '`L`:`Other-text`', '`G`:`Other-text`', '`F`:`Other-text`', '`E`:`Other-text`', '`T`:`Other-text`', '`P`:`-`', '`-`:`-`']<br/>- `simpler`: ['`B`:`Bars`', '`S`:`Slate`', '`IN`:`Chyron`', '`CR`:`Credits`', '`P`:`Neg`', '`-`:`Neg`']<br/>- `simple`: ['`B`:`Bars`', '`S`:`Slate`', '`IN`:`Chyron-person`', '`CR`:`Credits`', '`Y`:`Other-text`', '`KU`:`Other-text`', '`M`:`Other-text`', '`F`:`Other-text`', '`E`:`Other-text`', '`GLOTW`:`Other-text`', '`P`:`Neg`', '`-`:`Neg`']<br/>- `relaxed`: ['`B`:`Bars`', '`S`:`Slate`', '`Y`:`Chyron`', '`KU`:`Chyron`', '`IN`:`Chyron`', '`CR`:`Credits`', '`M`:`Other-text`', '`F`:`Other-text`', '`E`:`Other-text`', '`GLOTW`:`Other-text`', '`P`:`Neg`', '`-`:`Neg`']<br/>- `binary-bars`: ['`B`:`Bars`']<br/>- `binary-slate`: ['`S`:`Slate`']<br/>- `binary-chyron-strict`: ['`IN`:`Chyron-person`']<br/>- `binary-chyron-relaxed`: ['`Y`:`Chyron`', '`KU`:`Chyron`', '`IN`:`Chyron`']<br/>- `binary-credits`: ['`CR`:`Credits`']<br/><br/> Only applies when `useStitcher=true`.
- `pretty`: optional, defaults to `false`

    - Type: boolean
    - Multivalued: False
    - Choices: **_`false`_**, `true`


    > The JSON body of the HTTP response will be re-formatted with 2-space indentation.
- `runningTime`: optional, defaults to `true`

    - Type: boolean
    - Multivalued: False
    - Choices: `false`, **_`true`_**


    > The running time of the app will be recorded in the view metadata.
- `hwFetch`: optional, defaults to `false`

    - Type: boolean
    - Multivalued: False
    - Choices: **_`false`_**, `true`


    > The hardware information (architecture, GPU and vRAM) will be recorded in the view metadata.

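
As referenced in the `tfLabelMap` description above, here is a hedged sketch of supplying a custom label binning as repeated query parameters. The endpoint, port, and input file are the same illustrative assumptions as in the earlier mode example; the mappings themselves are made up, using labels from the TimePoint labelset documented under Outputs.

```python
import requests

# tfLabelMapPreset=nopreset keeps the default `relaxed` preset from overriding
# the explicit map; passing a list makes `requests` repeat the tfLabelMap key,
# which is how a multivalued parameter is supplied.
params = {
    "tfLabelMapPreset": "nopreset",
    # Illustrative binning: three chyron-ish TP labels into one TF label,
    # plus slates kept as their own TF label.
    "tfLabelMap": ["S:Slate", "IN:Chyron", "Y:Chyron", "KU:Chyron"],
}

with open("input.mmif") as f:  # placeholder input MMIF
    resp = requests.post("http://localhost:5000/", params=params, data=f.read())

# All unmapped TP labels (B, CR, E, F, GLOTW, M, P, ...) fall through to the
# negative label "-", which is added to the TF labelset automatically.
print(resp.status_code)
```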
|
#### Outputs
(**Note**: "*" as a property value means that the property is required but can be any value.)

(**Note**: Not all output annotations are always generated.)

- [http://mmif.clams.ai/vocabulary/TimeFrame/v6](http://mmif.clams.ai/vocabulary/TimeFrame/v6)
    - _timeUnit_ = "milliseconds"

- [http://mmif.clams.ai/vocabulary/TimePoint/v5](http://mmif.clams.ai/vocabulary/TimePoint/v5)
    - _timeUnit_ = "milliseconds"
    - _labelset_ = a list of ["B", "CR", "E", "F", "GLOTW", "IN", "KU", "M", "P", "S", "Y"]

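For orientation, here is a hedged sketch of the shape of one stitched TimeFrame annotation as it might appear in an output view. The ids and values are invented, and the exact serialization is governed by the TimeFrame v6 vocabulary linked above, not by this sketch.

```python
# Illustrative only; property names follow the documented output types.
timeframe = {
    "@type": "http://mmif.clams.ai/vocabulary/TimeFrame/v6",
    "properties": {
        "id": "tf_1",                  # invented id
        "label": "Slate",              # TF label after any tfLabelMap binning
        "start": 15000,                # timeUnit is documented as milliseconds
        "end": 27000,
        "representatives": ["tp_16"],  # pointer(s) to representative TimePoint(s)
    },
}
```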