@timbeccue (Contributor) commented Aug 22, 2025

Changes in service of #422

This PR introduces the local setup we'll use for site deployments of banzai, sans calibration caching, which will be added in a subsequent PR.

Local Banzai Notes

To run:

docker compose -f docker-compose-site.yml --env-file .site-banzai-env up -d --build

This requires an env file called .site-banzai-env that should look like this:

# .site-banzai-env

# Database Configuration
DB_ADDRESS=sqlite:////data/banzai.db    # Path for the docker container, not the host.
CAL_DB_ADDRESS=""                       # The address of the AWS banzai database from which we fetch calibrations
SITE_ID=

# API Configuration
API_ROOT=https://archive-api.lco.global/
AUTH_TOKEN=

# Data Paths
HOST_DATA_DIR=./site_banzai # this maps to /data in the container, and should contain unprocessed data in a subdirectory `raw`
HOST_PROCESSED_DIR=./site_banzai/output # path where processed data will be saved on the host

# Container Networking
FITS_BROKER=rabbitmq
FITS_BROKER_URL=amqp://rabbitmq:5672
FITS_EXCHANGE=fits_files
TASK_HOST=redis://redis:6379/0

# Celery Configuration
CELERY_TASK_QUEUE_NAME=e2e_task_queue
CELERY_LARGE_TASK_QUEUE_NAME=e2e_large_task_queue

# Worker Configuration
BANZAI_WORKER_LOGLEVEL=debug
OMP_NUM_THREADS=2
OPENTSDB_PYTHON_METRICS_TEST_MODE=1

In order to send images to be processed, run:

python queue_images.py <host_data_dir>/raw

The data to be processed should be in the directory ${HOST_DATA_DIR}/raw. The output will be saved in ${HOST_PROCESSED_DIR}.
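The contents of queue_images.py aren't shown in this PR description; below is a minimal sketch of what such a script might look like, assuming a kombu producer publishing one message per raw file to the fits_files exchange. The exchange type, message schema, and helper names here are all assumptions for illustration, not the actual script.

```python
import argparse
from pathlib import Path


def find_fits_files(directory):
    """Collect FITS files (plain or fpacked) from a directory, sorted for determinism."""
    return sorted(p for p in Path(directory).iterdir() if p.suffix in {'.fits', '.fz'})


def build_queue_message(path):
    """Build the payload posted for one file; the exact schema is an assumption."""
    path = Path(path)
    return {'filename': path.name, 'path': str(path.parent)}


def main(argv=None):
    parser = argparse.ArgumentParser(description='Queue raw frames for banzai processing.')
    parser.add_argument('data_dir', help='directory of raw FITS files, e.g. ${HOST_DATA_DIR}/raw')
    parser.add_argument('--broker-url', default='amqp://rabbitmq:5672')
    parser.add_argument('--exchange', default='fits_files')
    args = parser.parse_args(argv)

    # kombu is imported lazily so the helpers above stay importable without it
    from kombu import Connection, Exchange

    with Connection(args.broker_url) as conn:
        producer = conn.Producer(exchange=Exchange(args.exchange, type='fanout'))
        for fits_path in find_fits_files(args.data_dir):
            producer.publish(build_queue_message(fits_path))
```

In the real script, `main()` would be invoked from a `__main__` guard, matching the `python queue_images.py <host_data_dir>/raw` usage above.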

@timbeccue linked an issue Aug 22, 2025 that may be closed by this pull request
@timbeccue (Contributor, Author) commented:
Is the prior docker-compose.yml file important to preserve? If not, I'll replace it with my version, currently docker-compose.local.yml.

- Remove outdated docker-compose.yml
- Rename docker-compose.local.yml to docker-compose-site.yml
- Rename default local banzai directory from local_banzai to site_banzai
- Rename default db name from local-banzai.db to site-banzai.db
- Added line to cache sync daemon to create calibrations_cache directory if needed
@timbeccue marked this pull request as ready for review October 28, 2025 23:56
args_dict = args

# If a separate calibration db address is not provided, fall back to using the primary db address
if 'cal_db_address' not in args_dict or args_dict.get('cal_db_address') is None:
Collaborator:
This feels like it should be in the main.py parse args code rather than the context. The context stuff doesn't really care what we put into the object itself.

default='sqlite:///banzai-test.db',
help='Database address: Should be in SQLAlchemy form')
parser.add_argument('--calibration-db-address', dest='cal_db_address',
help='Optional separate database address for getting calibration files. Defaults to using the same address as --db-address.')
Collaborator:
This is where the cal-db-address should be set: default the arg to None and then check for None in this function.
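A minimal sketch of the suggested pattern, with option names and defaults taken from the diff above; the rest is illustrative:

```python
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--db-address', dest='db_address',
                        default='sqlite:///banzai-test.db',
                        help='Database address: Should be in SQLAlchemy form')
    parser.add_argument('--calibration-db-address', dest='cal_db_address', default=None,
                        help='Optional separate database address for getting calibration '
                             'files. Defaults to using the same address as --db-address.')
    args = parser.parse_args(argv)
    # The suggested fallback: resolve None here at parse time, so
    # downstream code never has to special-case a missing value.
    if args.cal_db_address is None:
        args.cal_db_address = args.db_address
    return args
```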

Contributor Author (@timbeccue):

I tried your suggestion of removing the cal_db_address fallback from context.py but that caused issues in the e2e tests because some parts of the code (e2e tests and celery workers) don’t use parse_args to set up the context, and were therefore failing to set the cal_db_address fallback to db_address.

Seems like the solution is either

  1. add the init logic back to context.py
  2. use access patterns like getattr(runtime_context, 'cal_db_address', runtime_context.db_address) everywhere cal_db_address is used (messy)
  3. require setting cal_db_address explicitly (might break existing banzai setups)
  4. get rid of the separate cal_db_address entirely

Any of these fixes is relatively easy to implement but option 4 seems best from an overall complexity standpoint if we are ok with the docker-compose-local.yml setup requiring users to set up their own db from scratch.
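Option 1 could look roughly like this; the class name and attribute-copying behavior are assumptions for illustration, not banzai's actual context.py:

```python
class RuntimeContext:
    """Hypothetical sketch of a banzai-style runtime context (option 1).

    The fallback lives in __init__ so every entry point that builds a
    context directly (celery workers, e2e tests) gets a usable
    cal_db_address, not just code paths that go through parse_args.
    """

    def __init__(self, args_dict):
        for key, value in args_dict.items():
            setattr(self, key, value)
        # If a separate calibration db address was not provided,
        # fall back to the primary db address.
        if getattr(self, 'cal_db_address', None) is None:
            self.cal_db_address = self.db_address
```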

Contributor Author (@timbeccue):

I've gone ahead with option 1 (adding init logic back to context.py) just to get tests working again

raise FrameNotAvailableError(f"Frame {frame_id} not found in archive")

# Check if 'url' field exists in the response
if 'url' not in response_data:
Collaborator:

What is the error trying to check? This feels too specific and like you are trying to solve something else here.

Contributor Author (@timbeccue):

I think I encountered some files that were missing the s3 url. Something from when I was working on the old cache setup a while back. Unfortunately I'm fuzzy on the details, but here's an example log with a file that prompted this:

2025-10-22 17:36:36.624 | 2025-10-22 21:36:36.620    ERROR:            sync: Unexpected error downloading lsc0m412-sq35-20240315-bpm-central30x30.fits.fz (frameid: 69366609, type: BPM): 'url' | {"processName": "MainProcess"}
...
2025-10-22 17:36:36.624 |     bytes = buffer.write(requests.get(response.json()['url'], stream=True, timeout=60).content)
2025-10-22 17:36:36.624 | KeyError: 'url'

I don't think the code is hitting this file anymore, and I tried running without this block and there were no issues. So maybe best to remove it?
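For reference, the KeyError in the log above comes from a bare dict lookup on the archive response; the explicit check turns it into an actionable error. A sketch of the difference, where FrameNotAvailableError is from the diff and the helper name is hypothetical:

```python
class FrameNotAvailableError(Exception):
    """Raised when the archive cannot serve a frame's data."""


def get_download_url(frame_id, response_data):
    """Extract the s3 download url from an archive API response dict.

    A bare response_data['url'] raises KeyError: 'url' for records that
    lack an s3 url (as in the BPM example above); checking explicitly
    produces an error message that names the offending frame instead.
    """
    if 'url' not in response_data:
        raise FrameNotAvailableError(
            f"Frame {frame_id} has no download url in archive response")
    return response_data['url']
```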

from kombu import Connection, Exchange


def post_to_processing_queue(filename, path, broker_url, exchange_name, **kwargs):
Collaborator:

This feels redundant with the file utils function. What new things does this add?

Contributor Author (@timbeccue):

This uses the file path rather than the frameid with the intended use case of wanting to process a file that exists on the local disk. Maybe it would be cleaner to modify the file utils function to accept a frameid or path?

Contributor Author (@timbeccue):

I am leaning towards keeping post_to_processing_queue separate from post_to_archive_queue in file utils because of the different intended workflows (local disk vs s3), and because this is a small function that's limited enough in scope that it may not be worth setting up the abstraction elsewhere. Let me know if you disagree.
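The distinction between the two workflows can be sketched by the payloads each one would post; the schemas below are assumptions for illustration, not the actual message formats:

```python
def build_archive_message(frame_id):
    """Payload for the post_to_archive_queue-style workflow: the worker
    fetches the frame from s3 by frameid (schema is an assumption)."""
    return {'frame_id': int(frame_id)}


def build_processing_message(filename, path):
    """Payload for the post_to_processing_queue workflow: the worker
    reads the file straight off the mounted local disk (schema assumed)."""
    return {'filename': filename, 'path': path}
```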

@cmccully (Collaborator) left a review:

Some recommendations for cleanup, but overall looks fine.

@cmccully (Collaborator):

You also need to add a change log message and update the pyproject version number.

  - Add changelog entry and bump version to 1.28.0
  - Rename .site-banzai-env to site-banzai-env (remove hidden prefix)
  - Change "Running at Site" to "Running Locally" in README
  - Use SQLAlchemy make_url() instead of string splitting in dbs.py
  - Add argparse to queue_images.py and use named FITS extension
  - Remove cal_db_address fallback from context.py (already in main.py)
  - Fix kernel name in example notebook
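The make_url() item in the list above can be illustrated with a generic SQLAlchemy usage sketch (not the actual dbs.py change): make_url parses a DSN into structured parts, which is sturdier than splitting the string on '/' or ':' by hand.

```python
from sqlalchemy.engine import make_url

# Four slashes in a sqlite DSN mean an absolute path inside the container.
url = make_url('sqlite:////data/banzai.db')
print(url.drivername)  # 'sqlite'
print(url.database)    # '/data/banzai.db'
```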
@timbeccue (Contributor, Author):

Added the requested changes. For two of your comments I replied with details instead of making changes; let me know if there's anything else needed for those to be resolved.

Development

Successfully merging this pull request may close these issues.

Initial setup for BANZAI-at-site