Time Series Database
A Time Series Database (TSDB) object is a database item used for storing scalar time varying data (TVD) that comes from external data sources.
Time Series Database objects cannot be used in simulations in conjunction with Events. However, events can be imported into scalar time series databases.
It is not possible to use "Access" as an external data source when carrying out automatic simulations using the Innovyze Live Server, due to the incompatibility of JET databases with x64 applications.
A TSDB consists of a number of streams and related external data sources. A stream defines a single series of time varying data and its properties.
The Observed and Forecast pages contain information about observed and forecast streams respectively. The Data Sources page contains details of the corresponding external sources, while the Lookups page is used to map alphanumeric text in the imported time varying data to numerical values that are recognised by InfoWorks ICM.
There are two types of stream: observed and forecast. Observed streams record real world measured values whereas forecast streams record predicted values.
The following stream properties apply to both observed and forecast streams unless otherwise stated:
Field | Description |
---|---|
Stream name |
The data stream name. When using Simple CSV format as the data source type, the data stream name will be used to identify the file containing the live data feed by default. When using FW Format1 (FloodWorks Data Transfer File Format 1) as the data source type, ensure that each individual data stream in the multiple stream FloodWorks file is assigned to a single stream name in InfoWorks ICM. |
Units type* |
The type of data measurement. |
Data interval* |
The expected maximum time interval between successive values. This is used to determine if values are missing at any given time. If each value is valid indefinitely (until superseded by a later value) then set this to 0. |
Value interpolation |
The value interpolation method defines how values are interpolated when needed at times in-between non-missing values. The options are Linear and Extend.
Note that the engine will always use Extend for rainfall, even if the method has been set to Linear, and will always use Linear for other items, e.g. flow and level. Value interpolation only applies if interpolation is needed when generating the data sent to the engine; for example, when two flow profiles are to go into the event file but have different time steps, they must be interpolated to a common time step. When using a data stream to update the state of an existing model, the Value interpolation field must be set to Linear. |
External data source |
Name of the external data source as defined on the Data Sources page.
Note: PI, PI WebAPI, EA RestAPI, iHistorian and ClearSCADA data sources cannot be selected as data sources for Forecast streams. An FW Format1 data source does not contain any information in the file for a forecast origin; therefore, the first time in each block of data is used as the origin. |
Updates disabled |
Enable this option to prevent the stream from being updated from external data sources. This applies to both manual and automatic updates. See Updating Time Series Data for more information. |
Latest update |
Time of the most recent update from an external data source. |
Latest data |
Time of the most recent data in the time series database. |
Records |
Number of time values in an observed stream or number of forecast origins in a forecast stream. |
External units |
The units for the telemetry feed. |
Lookup name |
The name of the lookup table, as defined on the Lookups page. |
Value offset |
Offset value to be added to the data in the file. |
Value factor |
Multiplication factor to be applied to the data in the file. |
Time Offset |
Time offset applied to all the time series database time values sent to the engine. The use of a time offset is mainly due to the way rainfall is recorded and how the InfoWorks ICM engine interprets rainfall. With 15-minute rainfall data, a value at time T usually represents the rain that fell over the 15 minutes up to time T. The engine interprets this the opposite way, applying the rainfall at time T over the following 15 minutes. In this example, a time offset of -15 minutes would align the recorded data with the engine's interpretation. |
Min. threshold |
When importing (updating from an external data source), any values below this threshold, in external units, will be marked as excluded, and a message containing the original value will be written to the Flag field of the Time Series Database Grid. |
Max. threshold |
When importing (updating from an external data source), any values above this threshold, in external units, will be marked as excluded, and a message containing the original value will be written to the Flag field of the Time Series Database Grid. |
Table |
Name of the table in the telemetry database which contains the live data feed. When using Simple CSV format as the data source type, this field can be used to specify the name of the file containing the live data feed if the file name does not match the Data Stream Name. When using SANDRE (XML) format as the data source type, this field should be used to specify the XML filename (without path) to be processed. A wild card can be used in the filename, e.g. prev_anduze_parfaite_*.xml, so that times encoded in the filename do not prevent loading. When using SOPHIE (Pre) format as the data source type, this field should be used to specify the name and location of the file containing the data. When using PI WebAPI or EA RestAPI as the data source type, this is used to set which PI/EA "point" (or tag on a PI database) to get data from. The dropdown list will contain the first 2000 names from the server; however, if the required point is not included in the list, you can type it in. When using ADS Telemetry as the data source type, this field contains a list of locations retrieved from the ADS Telemetry web address (specified in the Server and Database fields), from which the applicable location can be selected. |
Data column |
Column in the table (specified in the Table field) that contains the monitored data values. When using SOPHIE (Pre) format as the data source type, this field should be used to specify the station ID. When using ADS Telemetry as the data source type, this field will contain a list of entities retrieved from the ADS Telemetry web address (specified in the Server and Database fields), from which the applicable entity can be selected. |
Time column |
Column in the table (specified in the Table field) that contains the timestamp data. |
Origin time column |
Applies to forecast streams. Column in the table (specified in the Table field) that contains the origin time data. |
User field 1 to 3 |
When using SANDRE (XML) format as the data source type, User field 1 should be used to specify the filename of the transformation template (XSL format) used to extract the required data types. When using Jet, Oracle, SQL Server or ODBC data source types, the User field and User value fields are used to determine which values to get if the telemetry database contains data values for more than one stream. |
User value 1 to 3 |
When using SANDRE (XML) format as the data source type, User value 1 should be used to specify the data type e.g. Q (found in tag "GrdSerie"), and the station tag name e.g. B23600100101 (found in tag "CdCapteur") should be specified in User value 2. When using Jet, Oracle, SQL Server or ODBC data source types, the User field and User value fields are used to determine which values to get if the telemetry database contains data values for more than one stream. (See User field 1 above for example.) |
Tag name |
Applicable to connections to PI, iHistorian and ClearSCADA databases. Name of the tag in the database which contains the live data feed. A tag is a unique identifier for a data stream (or point).
Note (PI databases): ICMLive requires the classic PI OLEDB provider and not the Enterprise OLEDB provider. Both providers can be installed side by side and do not interfere with each other; however, some of the functionality required by ICMLive is only available through the classic provider. |
Description | A description of the TSDB. This field could, for example, describe how the data streams were set up. |
* indicates fields that are set at stream creation time and that cannot be modified afterwards.
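The combined effect of Value factor, Value offset, the minimum/maximum thresholds and Value interpolation can be illustrated with a short sketch. The Python below is illustrative only and is not the engine's implementation; in particular, the order in which factor and offset are applied is an assumption, as the table above does not state it.

```python
# Illustrative sketch of stream value handling (not the engine's code).
# Assumption: Value factor is applied before Value offset.

def to_internal(raw, factor=1.0, offset=0.0):
    """Convert a raw value from external units using Value factor and Value offset."""
    return raw * factor + offset

def is_excluded(raw, min_threshold=None, max_threshold=None):
    """Thresholds are compared against the value in external units; excluded
    values have a message written to the Flag field of the TSDB grid."""
    if min_threshold is not None and raw < min_threshold:
        return True
    if max_threshold is not None and raw > max_threshold:
        return True
    return False

def interpolate(t, t0, v0, t1, v1, method="Linear"):
    """Estimate a value at time t between two non-missing samples (t0, v0) and (t1, v1).
    'Extend' holds the earlier value (always used for rainfall); 'Linear'
    interpolates proportionally (always used for e.g. flow and level)."""
    if method == "Extend":
        return v0
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```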
Observed and forecast data stream configurations can be exported from the TSDB to a CSV file. The exported CSV file will contain all the data streams defined on the observed or forecast grid as well as the header row. If no streams are defined, only the header row will be exported.
To export data streams:
- Right-click on the header row on the Observed or Forecast tab.
- Select the Export to csv file option from the context menu.
A standard Windows Save As window is displayed. In this window:
- Set the location of the folder where the csv file is to be stored.
- Enter the name that you want to use for the exported file in the File name field.
- Click on the Save button.
A CSV file containing the exported data stream configurations is saved to the applicable folder.
The configuration information for observed and forecast data streams can be imported from a CSV file to a TSDB. When a CSV file is selected for import, the Import Stream Configuration dialog is displayed which allows you to map columns in the CSV file to TSDB data fields.
When configuration data is imported into the TSDB, the TSDB is automatically saved.
If you want to import configuration data from a large csv file into an existing TSDB, it is advisable to import it first into a new, empty TSDB (which can subsequently be deleted), test that it has imported as expected, before importing it to the existing TSDB.
To import configuration information for data streams into a TSDB:
- Right-click on the header row on the Observed or Forecast tab.
- Select the Import from csv file option from the context menu.
A standard Windows Open window is displayed. In this window:
- Find the location of the folder where the csv file is stored.
- Select the csv file to be imported. The selected file name will appear in the File name field.
- Click on the Open button.
The Import Stream Configuration dialog is displayed:
- If there is an existing configuration file (which contains the mappings between the csv data and the TSDB fields) that you want to use:
- Click on the Load config button, and a Windows Open window is displayed. In this window:
- Locate the cfg file you want to use. The selected file name will appear in the File name field.
- Click on the OK button. The mappings between the TSDB fields and the CSV columns will be made according to the settings in the selected configuration file. Any mapping can be changed, if required, by selecting a different option from the dropdown list in the appropriate field in the CSV column. Note that the mappings for the TSDB fields Latest update, Latest data, Records and Origin time are read-only and cannot be changed.
If you do not want to use an existing configuration file or none exist:
- Ensure that the Input file has header row box is checked if you want the first row in the csv file to appear as entries in the CSV column and as options in the adjacent dropdown lists. If the box is unchecked, column numbers will be used instead.
- Make any changes to the mapping between the items in the CSV column and the fields in the TSDB data. Note that the mappings for the TSDB fields Latest update, Latest data, Records and Origin time are read-only and cannot be changed. If many of the current mappings are incorrect, you can use the Clear grid button to clear all the entries in the CSV column, and then select the appropriate items from the dropdown lists in each field in the CSV column.
- Use the Test import button to check which items would be imported and to highlight any problems.
- If you want to save the current mappings to a configuration file, click on the Save config button. A Windows Save As window is displayed, in which you can specify the name for the configuration file. Click on the Save button; the Save As window closes and the file is saved with the specified name and a cfg file extension.
- Select OK to import the configuration data.
The imported data will be added at the end of any existing rows on the appropriate Observed or Forecast page, and the TSDB object is automatically saved.
Additional options are available via the context menu in the grid rows:
Option |
Description |
---|---|
Cut |
Disabled when a full row (data stream) is selected. Cut the currently selected cell value. |
Copy |
Copy the currently selected data stream / cell value. |
Paste |
Paste the copied data stream / cell value in the currently selected row / cell. |
Delete Row |
Delete currently selected data stream. |
Show time series data |
Displays the Time Series Database Grid for the selected data stream. |
Test connection |
Test the connection to the data stream. |
Update data |
Available when external updates are enabled for the selected data stream. Select this option to manually update the currently selected data stream. The Update Time Series Data Dialog, where configuration of the update takes place, is displayed. See Updating Time Series Data for more information. |
Data sources are required to connect to external systems to get the time series data for the streams; for example, the name of the folder where radar forecast data is stored, or the connection string required for an RDBMS such as Oracle.
Data sources are updated sequentially, so if the update of one data source depends on the update of another, the data source it depends on should be placed before it in the list on the Data sources tab. This may occur, for example, if the scripts run third party models where the results of one model feed into another.
Field | Description |
---|---|
Data Source Name |
User defined name for the connection. |
Type |
Dropdown list of supported database types.
IMPORTANT NOTE: It is not possible to use "Access" as an external data source when carrying out automatic simulations using the Innovyze Live Server, due to the incompatibility of JET databases with x64 applications.
Note: ICMLive requires the classic PI OLEDB provider and not the Enterprise OLEDB provider. Both providers can be installed side by side and do not interfere with each other; however, some of the functionality required by ICMLive is only available through the classic provider. |
Provider |
Enabled when Type is set to SQL Server. Dropdown list of the available SQL Server providers. |
Net service name |
Enabled when Type is set to Oracle. The name that was used to identify this particular Oracle database instance on your PC. |
Server |
Enabled when Type is set to SQL Server, PI, PI WebAPI, iHistorian, ClearSCADA, EA RestAPI or ADS Telemetry. Name of the server on which the telemetry database is stored. For PI WebAPI and EA RestAPI, the connection is made using HTTP/TLS, which defaults to port 443. If the server is configured to use a different port, ensure that this is entered as part of the server name, e.g. "devdata.Innovyze.com:444". For ADS Telemetry, this is the first part of the web address, e.g. https://api.adsprism.com |
Database |
Enabled when Type is set to SQL Server, PI WebAPI or ADS Telemetry. For SQL Server, this is the name of the telemetry database. For PI WebAPI, this is the name of the PI data server. For ADS Telemetry, this is the remainder of the web address; the specified Server and Database should together form the applicable ADS Telemetry URL. |
Logon type |
Enabled when Type is set to SQL Server, Oracle, PI, PI WebAPI, iHistorian or ClearSCADA. The type of logon to use; when set to Username / Password, the Username and Password fields below apply. |
Username |
Enabled when Type is set to SQL Server, Oracle, PI, PI WebAPI, iHistorian or ClearSCADA. For database types other than Oracle, this field is only available when Logon Type is set to Username / Password. Username to log on to database. |
Password |
Enabled when Type is set to SQL Server, Oracle, PI, PI WebAPI, iHistorian, ClearSCADA or ADS Telemetry. For database types other than Oracle and ADS Telemetry, this field is only available when Logon Type is set to Username / Password. The password to log on to database. For ADS Telemetry, this is the user key. The password or user key will be displayed as asterisks in the grid view. |
Command timeout |
Timeout, in seconds, for obtaining values from the data feeds. |
Schema |
Enabled when Type is set to SQL Server, Oracle or ODBC. The name of the user on the SQL Server, Oracle or ODBC database server that owns the database schema. |
Filename / Folder |
Enabled when Type is set to JET, Simple CSV, Batch CSV, FW Format1, SOPHIE (Pre) or SANDRE (XML). Path of the telemetry database when using a JET database. Path of the folder containing data stream .txt files when using Simple CSV format. Path of the folder containing the comma separated variable formatted files when using Batch CSV. Path of the folder containing the datastream .dat files to be loaded when using FW Format1. Path of the folder containing data stream .pre files when using SOPHIE (Pre) format. Path of the folder containing data stream .xml and .xsl files when using SANDRE (XML) format. |
Connection String |
Enabled when Type is set to ODBC, ClearSCADA or PI WebAPI. String containing parameters used to connect to the data source. For PI WebAPI, the only string that can be specified is no_cert (case insensitive); if this is used, the certificates on the server will not be validated. This is not recommended, and should only be used if a server has not been set up correctly, or the certificates have expired, and the user is prepared to accept the risk. |
Time Zone |
Dropdown list with all the available time zones. Allows users to select the time zone of the data. |
Last Time Series Data Update (UTC) |
Read-only. UTC time that the data source was last updated. This field is automatically revised by InfoWorks ICM whenever an automatic or a manual update of the data source is carried out. See Updating Time Series Data for further details. |
Automatic Update Disabled |
Disabled in InfoWorks ICM. |
Automatic Update Start At (min) |
Disabled in InfoWorks ICM. |
Automatic Update Interval (min) |
Disabled in InfoWorks ICM. |
Automatic Update Trigger File |
Disabled in InfoWorks ICM. |
Script |
Absolute path and name of the script file, plus any script parameters, which will be run at the start of the data update process. The following script parameters may be specified:
Parameters should be passed to the script as comma separated variables, with or without quotation marks; however, quotation marks are required when there is a comma or quotation mark in the particular input parameter. Parameter values may be specified for testing purposes; however, these values will not be used if the TSDB update occurs via the sim pre-processor.
Note: All script parameters, except $datetime_format, will be populated by InfoWorks ICM if the TSDB data update occurs via the sim pre-processor. They will not be populated by the Data Loader, nor when using the Update data option from the context menu (displayed by right-clicking on an entry in the Data sources tab). If the Data Loader is being used but the parameters are also required to be passed to the script, ensure that the Automatic Update Disabled box is checked to prevent the Data Loader from performing the update. In this case the sim pre-processor will perform the update before the simulation is launched. |
Script timeout (s) |
Interval of time after which the script is deemed to have failed. |
Additional options are available via the context menu:
Option |
Description |
---|---|
Cut |
Disabled when a full row (data source) is selected. Cut the currently selected cell value. |
Copy |
Copy the currently selected data source / cell value. |
Paste |
Paste the copied data source / cell value in the currently selected row / cell. |
Delete row |
Delete currently selected data source. |
Update data |
Select this option to manually update the currently selected data source. The Update Time Series Data Dialog, where configuration of the update takes place, is displayed. See Updating Time Series Data for more information. |
The Simple CSV format used as a data source in the TSDB is an ASCII text file containing one data stream per file. Each line of the file contains a date, a time and a value for the data stream. All the lines in the file must be in chronological order.
Each line of the file has the following format:
dd/mm/yyyy hh:nn,value
where yyyy is the four-digit year, mm is the 2-digit month, dd is the 2-digit day, hh is the 2-digit hour, nn is the 2-digit minute and value is the data value. The date is separated from the time by a single space, and the time is separated from the data value by a non-numeric character (a comma in the example above).
When updating forecast data, the first time in the file is used as the forecast origin.
The following is an example of a valid data file:
12/02/2012 09:00,0.481
12/02/2012 09:15,0.483
12/02/2012 09:30,0.483
12/02/2012 09:45,0.485
12/02/2012 10:00,0.484
12/02/2012 10:15,0.483
12/02/2012 10:30,0.479
12/02/2012 10:45,0.479
12/02/2012 11:00,0.478
12/02/2012 11:15,0.477
12/02/2012 11:30,0.477
12/02/2012 11:45,0.475
12/02/2012 12:00,0.474
12/02/2012 12:15,0.474
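As a minimal sketch of reading this format (assuming well-formed lines and the comma separator shown above; the function name is illustrative, not part of the product):

```python
from datetime import datetime

def parse_simple_csv(path):
    """Parse a Simple CSV stream file: one 'dd/mm/yyyy hh:nn,value' line per entry."""
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            timestamp, value = line.split(",", 1)  # comma separates time from value here
            rows.append((datetime.strptime(timestamp, "%d/%m/%Y %H:%M"), float(value)))
    return rows  # lines are expected to already be in chronological order
```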
When setting up the data source details in the TSDB, the path of the folder in which the files with the data stream feeds are located is specified in the Filename/Foldername field of the Data Sources grid. This allows one data source to be specified for a set of streams.
By default, when updating data streams from external data sources, InfoWorks looks for a file in the Data Source Filename/Foldername field called <Stream Name>.txt. However, this behaviour can be overridden by specifying a filename, including the filename extension, in the Table field of the Observed grid for the stream. (InfoWorks will still look in the same folder defined in the data source for the specified file.)
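A hypothetical sketch of this filename resolution (the function stream_file and its parameters are illustrative only):

```python
import os

def stream_file(data_source_folder, stream_name, table_field=None):
    """Resolve the file for a Simple CSV stream: '<Stream Name>.txt' by default,
    or the filename (with extension) given in the stream's Table field."""
    filename = table_field if table_field else stream_name + ".txt"
    return os.path.join(data_source_folder, filename)

# e.g. stream_file(r"D:\Telemetry", "Gauge_01")            -> 'D:\\Telemetry\\Gauge_01.txt'
#      stream_file(r"D:\Telemetry", "Gauge_01", "g01.csv") -> 'D:\\Telemetry\\g01.csv'
```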
Update from CSV - file locations
When a data stream is updated from a CSV data source, successfully loaded entries are moved to a file in a 'loaded' subdirectory (relative to the CSV data file), and failed entries to a file in a 'failed' subdirectory.
The files in the loaded and failed directories are named according to the following scheme:
<csv_file_name>_<current_date_time>UTC.<csv_file_extension>
where:
<csv_file_name> - name of the CSV telemetry file
<csv_file_extension> - extension of the CSV telemetry file
<current_date_time> - the current date and time in UTC, using format YYYYMMDDhhmmss
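For illustration, the archive name could be constructed as in the following sketch (naming scheme only; the actual folder creation and file move are performed by InfoWorks, and the function name is an assumption):

```python
import os
from datetime import datetime, timezone

def archived_name(csv_path, outcome="loaded"):
    """Build '<name>_<YYYYMMDDhhmmss>UTC.<ext>' inside a 'loaded' or 'failed'
    subdirectory relative to the CSV data file."""
    folder, filename = os.path.split(csv_path)
    name, ext = os.path.splitext(filename)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return os.path.join(folder, outcome, "{}_{}UTC{}".format(name, stamp, ext))

# e.g. archived_name(r"D:\Telemetry\Gauge_01.txt")
#   -> 'D:\\Telemetry\\loaded\\Gauge_01_20150212090000UTC.txt' (stamp varies)
```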
A batch CSV format file, used as a data source in the TSDB, is an ASCII text file which can contain multiple data streams per file. See Batch file format for a description of the format.
Batch data loading process
A batch CSV data source loads data from all CSV-formatted files that have names that match the pattern specified in the Filename/Foldername field of the Data Sources tab. The pattern must include the full path to the folder containing the data files and can include wildcard characters in the filename only. Examples of valid patterns that can be specified in the Filename/Foldername field are:
- D:\Data\CSV\*.csv
- \\server\share\mycsvdata??????.txt
When an update is initiated (either manually or automatically) it loads all data from all files that match the pattern, then moves those files to an archive subfolder of the folder containing the files. Data is loaded for all times and all streams (except streams for which data loading is disabled) regardless of whether only an individual stream has been selected for update or a time range has been specified. The data loading process requires write access to the folder containing the files (in order to move them to the archive folder, which it creates if needed).
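A rough sketch of this match-and-archive behaviour (illustrative only; the real loader also parses the stream data, skips locked files and writes the log file described below):

```python
import glob
import os
import shutil

def load_batch(pattern):
    """Load every file matching the pattern, then move it to an 'archive'
    subfolder of the folder containing the files."""
    for path in glob.glob(pattern):          # e.g. r"D:\Data\CSV\*.csv"
        # ... load all streams and all times from this file here ...
        archive = os.path.join(os.path.dirname(path), "archive")
        os.makedirs(archive, exist_ok=True)  # created if needed; requires write access
        shutil.move(path, os.path.join(archive, os.path.basename(path)))
```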
If a file is locked, for example, because an external process is still writing data to the file, it is skipped and will be loaded the next time an update is performed. Otherwise, the data is loaded and any errors, warnings and other information about the processing of the file are written to a daily log file, located in the same folder as the data file, which has a name of the form:
YYYY-MM-DD.log
The log file contains one line per log entry, each stamped with the time of entry (in local time). Entries indicate the start and end of processing for each update and each individual file, the occurrence of errors in data files (including file name and line number) and summaries of the number of valid and invalid lines found in each file.
Batch updates are intended primarily for automatic use in Innovyze Live products where the source system periodically writes data files (typically named using a time stamp or other unique identifier) and the data loader loads them automatically at regular intervals (for prompt data loading, a one minute interval is suggested). Updates can also be initiated manually, however, the update will load all available data, not just for the specified data stream or time range.
Because the streams are named within the file itself, it is not strictly necessary to specify the data source (in the External data source field on the Observed or Forecast pages) for each stream that will be loaded from that data source. However, it is useful to do so for at least one such stream, as this enables use of the Validate option on the context menu for that stream in the grid. Validation will process all matching files exactly as for data loading, with errors and warnings written to the log file, except that no data is actually loaded and the files are not moved to the archive folder. If any invalid lines are found, or no data file can be found that matches the pattern, the validation fails and the log file should be inspected for details of the error(s).
Note that a manually-initiated update does not report any errors directly to the user – it merely indicates when the update has completed. To check for errors it is necessary to inspect the log file.
File format
The format of files for Batch CSV is intended to be flexible. In addition to the three default comma-separated columns that comprise a batch CSV file, three further optional columns are available.
Default file format
As a minimum, the file must contain comma-separated columns for the stream name, date-time and value, in that order. For example:
SO99007400_ActualRainFall,2015-09-04T08:00:00+00:00,0.2
There are no column headings and the lines in the file may appear in any order. Blank lines and white space in any column are ignored.
The stream name on each line must match the name of an existing stream. If not, an error will be logged for that line.
Date-time must be specified as an ISO 8601 extended format date-time string in which the seconds and milliseconds elements are optional. The T separator between date and time may be replaced by a space. If the string ends either with Z (i.e. UTC time) or with a time-zone specifier (e.g. +11:00) then the time is loaded exactly as specified and any Time Zone specified in the Data Sources grid is ignored; otherwise the time is treated as local time in the specified Time Zone (but note there is no adjustment for daylight saving). Examples of valid date-times are:
- 2015-01-14T22:07:14.314Z
- 2015-01-14 22:07:14.3Z
- 2015-01-14T22:07:14
- 2015-01-14 22:07
- 2015-01-14 22:07:14+00:00
- 2015-01-14 18:07:14-04:00
If the value column on any given line is blank, an explicit missing value is loaded at the specified date-time. Otherwise the value is read as a floating-point number.
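A minimal parsing sketch for these date-times (source_tz is a stand-in for the data source's Time Zone setting, and the function name is illustrative):

```python
from datetime import datetime, timezone

def parse_batch_datetime(text, source_tz=timezone.utc):
    """Parse a batch CSV date-time. Strings ending in 'Z' or a zone offset are
    taken exactly as written; otherwise the time is treated as local time in
    the data source's Time Zone (no daylight-saving adjustment)."""
    text = text.strip()
    if text.endswith("Z"):
        text = text[:-1] + "+00:00"      # widen compatibility with older Pythons
    # fromisoformat accepts 'T' or a space as the separator; note that
    # fractional seconds like '.3' in the examples above need Python 3.11+.
    dt = datetime.fromisoformat(text)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=source_tz)
    return dt
```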
Optional Columns
The following optional comma-separated columns can also be included in a batch CSV file:
- A fourth column that specifies the unit of measure. This is used solely as a check against the External Units specified for the relevant stream (if these do not match, the line is not loaded and an error message is written to the log file).
- A fifth column that specifies whether the value should be marked as invalid (i.e. not to be used for modelling). If the entry in the fifth column is any of 1, T, TRUE, or Y, then the value is loaded but marked as invalid; otherwise it is marked as valid.
- A sixth column that, for forecast data, specifies the date-time of the origin of the forecast. The date-time format is as above. If this column contains a date-time, but the named stream is not a forecast stream, an error is logged and the value is not loaded. Conversely, if the column is blank or missing, but the named stream is a forecast stream, an error is logged.
An example of a valid file is:
SO99007400_ActualRainFall, 2015-09-04T08:45:00+00:00,0.0, mm/hr ,false
SO99007400_ActualRainFall,2015-09-04T08:00:00+00:00,0.2
SO99007400_ActualRainFall, 2015-09-04T08:30:00+00:00,0.2, mm/hr
SO99007400_ActualRainFall,2015-09-04T09:00:00+00:00,1.4,mm/hr,false
SO99007400_ActualRainFall,2015-09-04T07:45:00+00:00,99,mm/hr,true
SO99007400_ActualRainFall,2015-09-04T08:15:00+00:00,,mm/hr,false
XYZ154_Forecast_Level, 2015-09-04T08:45:00+00:00, 12.4, m, false, 2015-09-04T07:45:00+00:00
XYZ154_Forecast_Level, 2015-09-04T08:00:00+00:00, 12.7, m, false, 2015-09-04T07:45:00+00:00
XYZ154_Forecast_Level, 2015-09-04T08:30:00+00:00, 12.9, m, false, 2015-09-04T07:45:00+00:00
XYZ154_Forecast_Level, 2015-09-04T09:00:00+00:00, 12.6, m, false, 2015-09-04T07:45:00+00:00
XYZ154_Forecast_Level, 2015-09-04T07:45:00+00:00, 12.6, m, false, 2015-09-04T07:45:00+00:00
XYZ154_Forecast_Level, 2015-09-04T08:15:00+00:00, 12.2, m, false, 2015-09-04T07:45:00+00:00
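A sketch of how such lines could be split into their columns. It assumes no embedded commas, treats the invalid flag case-insensitively (an assumption; the list above shows uppercase forms), and duplicates the date-time handling from the earlier parse_batch_datetime sketch so it is self-contained:

```python
from datetime import datetime, timezone

def _dt(text):
    """Date-time handling as in the earlier parse_batch_datetime sketch."""
    text = text.strip()
    if text.endswith("Z"):
        text = text[:-1] + "+00:00"
    parsed = datetime.fromisoformat(text)
    return parsed if parsed.tzinfo else parsed.replace(tzinfo=timezone.utc)

def parse_batch_line(line):
    """Split one batch CSV line into its columns.
    Columns: stream, date-time, value[, unit[, invalid flag[, forecast origin]]]."""
    cols = [c.strip() for c in line.split(",")]
    value = cols[2]
    return {
        "stream": cols[0],
        "time": _dt(cols[1]),
        "value": float(value) if value else None,    # blank -> explicit missing value
        "unit": cols[3] if len(cols) > 3 else None,  # checked against External units
        "invalid": len(cols) > 4 and cols[4].upper() in ("1", "T", "TRUE", "Y"),
        "origin": _dt(cols[5]) if len(cols) > 5 and cols[5] else None,
    }
```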
The FW Format 1 file is an ASCII text file containing multiple data streams per file. In InfoWorks ICM, each data stream in the ASCII file must be assigned to a Stream name in Observed or Forecast pages. For example, the following FloodWorks File Format 1 file contains two datastreams called 'N G21427R::RAIN' and 'N G43012L::LEVEL':
V 1
N G21427R::RAIN
D 2001 03 02 16 15 0.0 0 D
D 2001 03 02 16 30 0.4 0 D
N G43012L::LEVEL
D 2001 03 02 11 00 2.471 0 D
D 2001 03 02 11 15 2.493 0 D
D 2001 03 02 13 30 2.715 0 D
V 1
In the TSDB, the names of the individual data streams in the file would be entered on separate rows in the Stream name column as N G21427R::RAIN and N G43012L::LEVEL.
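Based solely on the example above, a file in this format could be read with a sketch like the following; the exact record layout should be confirmed against the FloodWorks Help, and the function name is illustrative:

```python
from datetime import datetime

def parse_fw_format1(path):
    """Group the 'D' data lines of an FW Format 1 file under the preceding 'N'
    stream-name line; a quality flag of 4 excludes the value, all others load."""
    streams, name = {}, None
    with open(path) as f:
        for raw in f:
            parts = raw.split()
            if not parts or parts[0] == "V":     # skip blank and version lines
                continue
            if parts[0] == "N":
                name = raw.strip()               # full stream name, e.g. 'N G21427R::RAIN'
                streams.setdefault(name, [])
            elif parts[0] == "D" and name:
                y, mo, d, h, mi = map(int, parts[1:6])
                value, flag = float(parts[6]), int(parts[7])
                if flag != 4:                    # only flag value 4 is excluded
                    streams[name].append((datetime(y, mo, d, h, mi), value))
    return streams
```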
When setting up the data source details in the TSDB, the path of the folder in which the files with the data stream feeds are located is specified in the Filename/Foldername field of the Data Sources grid. This allows one data source to be specified for a set of streams.
The 15-minute time interval limitation and the restriction to 15-minute boundaries (e.g. 10:00, 10:15, 10:30) in the FW Format 1 file do not apply in InfoWorks ICM; all times specified in the datastream will be included in the TSDB.
Only FloodWorks data quality flags with a value of 4 are excluded when processed by InfoWorks ICM; all other flagged data is loaded as valid irrespective of any flag settings in the FW Format 1 file.
Control files are not required as input for the TSDB.
Refer to the FloodWorks Help for more detailed information about the format of the Data Transfer File Format 1 files.
Some imported time varying data is received in alphanumeric text format, and in order for this to be recognised by InfoWorks ICM, it must be converted into numerical values. This conversion is carried out according to the mapping defined in a lookup table, which can subsequently be assigned to a data stream in the Observed and Forecast pages.
Field | Description |
---|---|
Lookup Name |
User defined name for the lookup conversion table. |
Mapping |
Click on the button to display the Live Data Lookup grid editor, which is used to map alphanumeric text to numerical values. |
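Conceptually, a lookup table behaves like a simple text-to-number mapping. In the sketch below, the table contents and the pass-through of already-numeric values are assumptions for illustration only:

```python
# Hypothetical lookup table; the names and values here are examples only.
pump_state = {"OFF": 0.0, "ON": 1.0, "TRIPPED": -1.0}

def apply_lookup(raw_text, lookup):
    """Convert imported alphanumeric telemetry text to the numeric value
    that InfoWorks ICM can use, via the assigned lookup table."""
    try:
        return float(raw_text)           # already numeric: use the value directly
    except ValueError:
        return lookup[raw_text.strip()]  # otherwise map text to its numeric value

# e.g. apply_lookup("TRIPPED", pump_state) -> -1.0
```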
Working with TSDBs, opening or deleting historical versions of TSDBs
See the Time Series Database Objects topic for further details.
Importing events into a scalar TSDB
See the Importing Event Data into Time Series Databases topic for more information.
Initial conditions of events imported into TSDBs can be found in Catchment Initial Conditions objects.
Viewing and editing time series data points
See the Time Series Database Objects topic for further details.
Updating scalar time series data from data source
Scalar time series data may be updated manually (on-demand update) from external data sources. Refer to the Updating Time Series Data topic for further details.