This milestone is dedicated to developing the engine responsible for executing the web crawlers. The main features include functions for running a crawler from start to finish, resuming execution from a specific node using cached data, and a step-by-step execution mode that lets users manually control the execution process. Additional features could include error handling for execution failures, a system for pausing and resuming execution, and performance optimizations to keep crawler execution efficient.
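One way the step-by-step execution and cache-based resume could fit together is sketched below. Note that `Node`, `StepExecutor`, and the cache layout are illustrative assumptions, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Node:
    """One step of a crawler pipeline (hypothetical structure)."""
    name: str
    run: Callable[[dict], dict]   # produces this node's output from the previous state
    next: Optional["Node"] = None


class StepExecutor:
    """Runs a linear chain of crawler nodes one step at a time, caching each
    node's output so execution can later resume from a specific node."""

    def __init__(self, start: Node, cache: Optional[dict] = None):
        self.cache = dict(cache or {})
        self.current: Optional[Node] = start
        self.state: dict = {}
        # Resume support: skip nodes whose results are already cached.
        while self.current and self.current.name in self.cache:
            self.state = self.cache[self.current.name]
            self.current = self.current.next

    def step(self) -> bool:
        """Execute exactly one node; return False once the chain is finished."""
        if self.current is None:
            return False
        self.state = self.current.run(self.state)
        self.cache[self.current.name] = self.state
        self.current = self.current.next
        return True

    def run_all(self) -> dict:
        while self.step():
            pass
        return self.state
```

A caller could drive this one `step()` at a time for manual control, or construct a `StepExecutor` with a previously saved cache to resume from the first un-cached node.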
No due date • 2/4 issues closed

This milestone is about providing flexible and reliable data export options. The main features include functions for exporting data in various formats (JSON, CSV, database formats), a user interface for configuring data export options, and a system for scheduling data exports. Additional features include error handling for failed data exports and a system for saving and reusing commonly used data export configurations.
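The multi-format export could be as simple as a dispatcher over serializers. This is a minimal sketch covering JSON and CSV only; `export_rows` and the record shape are assumptions, and the unsupported-format error stands in for the planned export error handling:

```python
import csv
import io
import json


def export_rows(rows: list[dict], fmt: str) -> str:
    """Serialize a list of crawled records as JSON or CSV (illustrative sketch)."""
    if fmt == "json":
        return json.dumps(rows, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()
    raise ValueError(f"unsupported export format: {fmt}")
```

Database formats would slot in as additional branches (or a registry of format handlers), and a scheduler could call the same function on a timer with a saved configuration.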
No due date

This milestone aims to provide a robust set of tools for retrieving, manipulating, and transforming data from the web. This includes implementing functions for data manipulation and transformation, creating a user interface for configuring these operations, and developing a system for previewing data transformations before applying them. Additional features include error handling for invalid data transformations and a system for saving and reusing commonly used data transformations.
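Previewing a transformation before applying it can amount to running it on a small sample and reporting per-row errors instead of failing outright. The helper name and result shape below are hypothetical:

```python
from typing import Callable


def preview_transform(rows: list[dict],
                      transform: Callable[[dict], dict],
                      sample: int = 3) -> list[dict]:
    """Apply a transformation to the first few rows only, capturing errors
    per row so the user can inspect the outcome before committing it."""
    results = []
    for row in rows[:sample]:
        try:
            # Copy the row so a failed preview never mutates the original data.
            results.append({"ok": True, "row": transform(dict(row))})
        except Exception as exc:
            results.append({"ok": False, "error": str(exc)})
    return results
```

Only after the user approves the preview would the same transformation be applied to the full dataset.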
No due date • 3/9 issues closed

This milestone focuses on creating an intuitive and user-friendly interface for creating and editing web crawlers. The main features include drag-and-drop functionality for nodes, a user interface for editing node attributes, and logic for connecting and disconnecting nodes. Additional features include a system for saving and loading node configurations and error handling for invalid node connections.
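The connect/disconnect logic with validation of invalid connections might look like the sketch below, which rejects self-links and cycles. `NodeGraph` and its validation rules are assumptions about what "invalid" means here:

```python
class NodeGraph:
    """Minimal sketch of node connection logic for a crawler editor:
    edges are directed, and connections that would form a cycle are refused."""

    def __init__(self):
        self.edges: dict[str, set[str]] = {}

    def connect(self, src: str, dst: str) -> None:
        if src == dst:
            raise ValueError("a node cannot connect to itself")
        if self._reaches(dst, src):
            raise ValueError("connection would create a cycle")
        self.edges.setdefault(src, set()).add(dst)

    def disconnect(self, src: str, dst: str) -> None:
        self.edges.get(src, set()).discard(dst)

    def _reaches(self, start: str, target: str) -> bool:
        """Depth-first search: is `target` reachable from `start`?"""
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.edges.get(node, ()))
        return False
```

The editor UI would call `connect` when the user drags a link between nodes and surface the `ValueError` message as the invalid-connection error.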
No due date • 10/15 issues closed