2 changes: 1 addition & 1 deletion README.md
@@ -92,7 +92,7 @@ Performs an SPI transaction. Chip-select should be held active between blocks.

Similar to `xfer2` but accepts arbitrary large lists. If list size exceeds
buffer size (which is read from `/sys/module/spidev/parameters/bufsiz`), data
will be split into smaller chunks and sent in multiple operations.
will be split into smaller 4096-byte chunks and sent in multiple operations.

close()

74 changes: 39 additions & 35 deletions spidev_module.c
@@ -39,12 +39,12 @@
#define SPIDEV_MAXPATH 4096

#define BLOCK_SIZE_CONTROL_FILE "/sys/module/spidev/parameters/bufsiz"
// The xfer3 function attempts to use large blocks if /sys/module/spidev/parameters/bufsiz setting allows it.
// The xfer function attempts to use large blocks if /sys/module/spidev/parameters/bufsiz setting allows it.
// However, if we cannot get a value from that file, we fall back to this safe default.
#define XFER3_DEFAULT_BLOCK_SIZE SPIDEV_MAXPATH
// Largest block size for xfer3 - even if /sys/module/spidev/parameters/bufsiz allows bigger
#define XFER_DEFAULT_BLOCK_SIZE SPIDEV_MAXPATH
// Largest block size for xfer - even if /sys/module/spidev/parameters/bufsiz allows bigger
// blocks, we won't go above this value. As I understand, DMA is not used for anything bigger so why bother.
#define XFER3_MAX_BLOCK_SIZE 65535
#define XFER_MAX_BLOCK_SIZE 65535


#if PY_MAJOR_VERSION < 3
@@ -60,38 +60,38 @@
#define PyInt_Type PyLong_Type
#endif

// Maximum block size for xfer3
// Initialised once by get_xfer3_block_size
uint32_t xfer3_block_size = 0;
// Maximum block size for xfer
// Initialised once by get_xfer_block_size
uint32_t xfer_block_size = 0;

// Read maximum block size from the /sys/module/spidev/parameters/bufsiz
// In case of any problems reading the number, we fall back to XFER3_DEFAULT_BLOCK_SIZE.
// If number is read ok but it exceeds the XFER3_MAX_BLOCK_SIZE, it will be capped to that value.
// In case of any problems reading the number, we fall back to XFER_DEFAULT_BLOCK_SIZE.
// If number is read ok but it exceeds the XFER_MAX_BLOCK_SIZE, it will be capped to that value.
// The value is read and cached on the first invocation. Following invocations just return the cached one.
uint32_t get_xfer3_block_size(void) {
uint32_t get_xfer_block_size(void) {
int value;

// If value was already initialised, just use it
if (xfer3_block_size != 0) {
return xfer3_block_size;
if (xfer_block_size != 0) {
return xfer_block_size;
}

// Start with the default
xfer3_block_size = XFER3_DEFAULT_BLOCK_SIZE;
xfer_block_size = XFER_DEFAULT_BLOCK_SIZE;

FILE *file = fopen(BLOCK_SIZE_CONTROL_FILE,"r");
if (file != NULL) {
if (fscanf(file, "%d", &value) == 1 && value > 0) {
if (value <= XFER3_MAX_BLOCK_SIZE) {
xfer3_block_size = value;
if (value <= XFER_MAX_BLOCK_SIZE) {
xfer_block_size = value;
} else {
xfer3_block_size = XFER3_MAX_BLOCK_SIZE;
xfer_block_size = XFER_MAX_BLOCK_SIZE;
}
}
fclose(file);
}

return xfer3_block_size;
return xfer_block_size;
}

PyDoc_STRVAR(SpiDev_module_doc,
@@ -173,12 +173,13 @@ PyDoc_STRVAR(SpiDev_write_doc,
static PyObject *
SpiDev_writebytes(SpiDevObject *self, PyObject *args)
{
int status;
uint16_t ii, len;
uint8_t buf[SPIDEV_MAXPATH];
PyObject *obj;
PyObject *seq;
char wrmsg_text[4096];
int status;
uint16_t ii, len;
PyObject *obj;
PyObject *seq;
const uint32_t spi_blk_maxsize = get_xfer_block_size();
uint8_t buf[spi_blk_maxsize];
Collaborator
It looks like you are using a VLA (variable length array) here. While this might be supported and probably works, it is something that a lot of people think is bad practice for various reasons. I think one of the following options might be more suited:

  1. Use dynamic buffer allocation
  2. Use a static buffer with size XFER_MAX_BLOCK_SIZE as we don't need to care too much about saving memory

Author
You're right about VLAs—this does drop C89 support.

My goals were to:

  • avoid oversized fixed stack buffers while keeping portability,
  • avoid malloc/free in the I/O hot path (too slow on Raspberry Pi ↔ SoC transfers).

With a capped VLA:

  • stack use matches the actual need (typically <4 KB, max ~64 KiB),
  • growth is safely bounded,
  • no malloc overhead, best performance.

If strict C89 compliance is required, the VLA needs to be addressed, but IIRC it's not possible to just request XFER_MAX_BLOCK_SIZE of stack on some devices.

Collaborator
@fschrempf Sep 27, 2025
Good points!

but IIRC it's not possible to just request XFER_MAX_BLOCK_SIZE of stack on some devices.

I can't really judge that. I thought it would be safe to allocate a 64 KiB array on the stack on any Linux system, but that might be wrong. If we think it is not safe to do so we should probably avoid this case altogether. No matter if it is caused by a fixed stack buffer or through a VLA that can potentially have the same size.

avoid malloc/free in the I/O hot path (too slow on Raspberry Pi ↔ SoC transfers).

Same here, I can't really judge if the malloc in the hot path would impact the performance in practice. The xfer functions already use dynamic buffers for RX and TX. But I agree that avoiding it would be nice.

What about using a buffer on the heap that is tied to the lifetime of the spidev Python object? This buffer could then later be used by all transfer functions (writebytes, xfer, etc.)? This would result in:

  • no malloc overhead in hot path
  • no risk of hitting any stack limits
  • no C89 compliance issues (if this is a problem at all)
  • no problems due to VLAs apparently being optional in C11 and therefore not having a guarantee that compilers will support them in the future

char wrmsg_text[spi_blk_maxsize];
Collaborator
Why does the message string buffer have the same size as the data buffer? This looks wrong.

Author

Yes, you are correct. Previously it was set to 4096, which, come to think of it, might be a bit oversized. Perhaps

#define ERR_MSG_SIZE 256
char wrmsg_text[ERR_MSG_SIZE];

is enough?


if (!PyArg_ParseTuple(args, "O:write", &obj))
return NULL;
@@ -193,8 +194,8 @@ SpiDev_writebytes(SpiDevObject *self, PyObject *args)
return NULL;
}

if (len > SPIDEV_MAXPATH) {
snprintf(wrmsg_text, sizeof (wrmsg_text) - 1, wrmsg_listmax, SPIDEV_MAXPATH);
if (len > spi_blk_maxsize) {
snprintf(wrmsg_text, sizeof (wrmsg_text) - 1, wrmsg_listmax, spi_blk_maxsize);
PyErr_SetString(PyExc_OverflowError, wrmsg_text);
return NULL;
}
@@ -242,7 +243,8 @@ PyDoc_STRVAR(SpiDev_read_doc,
static PyObject *
SpiDev_readbytes(SpiDevObject *self, PyObject *args)
{
uint8_t rxbuf[SPIDEV_MAXPATH];
const uint32_t spi_blk_maxsize = get_xfer_block_size();
uint8_t rxbuf[spi_blk_maxsize];
int status, len, ii;
PyObject *list;

@@ -284,7 +286,7 @@ SpiDev_writebytes2_buffer(SpiDevObject *self, Py_buffer *buffer)
int status;
Py_ssize_t remain, block_size, block_start, spi_max_block;

spi_max_block = get_xfer3_block_size();
spi_max_block = get_xfer_block_size();

block_start = 0;
remain = buffer->len;
@@ -380,7 +382,7 @@ SpiDev_writebytes2_seq(SpiDevObject *self, PyObject *seq)
return NULL;
}

spi_max_block = get_xfer3_block_size();
spi_max_block = get_xfer_block_size();

bufsize = (len < spi_max_block) ? len : spi_max_block;

@@ -475,7 +477,8 @@ SpiDev_xfer(SpiDevObject *self, PyObject *args)
memset(&xfer, 0, sizeof(xfer));
#endif
uint8_t *txbuf, *rxbuf;
char wrmsg_text[4096];
const uint32_t spi_blk_maxsize = get_xfer_block_size();
char wrmsg_text[spi_blk_maxsize];

if (!PyArg_ParseTuple(args, "O|IHB:xfer", &obj, &speed_hz, &delay_usecs, &bits_per_word))
return NULL;
@@ -492,8 +495,8 @@ SpiDev_xfer(SpiDevObject *self, PyObject *args)
return NULL;
}

if (len > SPIDEV_MAXPATH) {
snprintf(wrmsg_text, sizeof(wrmsg_text) - 1, wrmsg_listmax, SPIDEV_MAXPATH);
if (len > spi_blk_maxsize) {
snprintf(wrmsg_text, sizeof(wrmsg_text) - 1, wrmsg_listmax, spi_blk_maxsize);
PyErr_SetString(PyExc_OverflowError, wrmsg_text);
Py_DECREF(seq);
return NULL;
@@ -644,7 +647,8 @@ SpiDev_xfer2(SpiDevObject *self, PyObject *args)
memset(&xfer, 0, sizeof(xfer));
Py_END_ALLOW_THREADS
uint8_t *txbuf, *rxbuf;
char wrmsg_text[4096];
const uint32_t spi_blk_maxsize = get_xfer_block_size();
char wrmsg_text[spi_blk_maxsize];

if (!PyArg_ParseTuple(args, "O|IHB:xfer2", &obj, &speed_hz, &delay_usecs, &bits_per_word))
return NULL;
@@ -661,8 +665,8 @@ SpiDev_xfer2(SpiDevObject *self, PyObject *args)
return NULL;
}

if (len > SPIDEV_MAXPATH) {
snprintf(wrmsg_text, sizeof(wrmsg_text) - 1, wrmsg_listmax, SPIDEV_MAXPATH);
if (len > spi_blk_maxsize) {
snprintf(wrmsg_text, sizeof(wrmsg_text) - 1, wrmsg_listmax, spi_blk_maxsize);
PyErr_SetString(PyExc_OverflowError, wrmsg_text);
Py_DECREF(seq);
return NULL;
@@ -783,7 +787,7 @@ SpiDev_xfer3(SpiDevObject *self, PyObject *args)
return NULL;
}

bufsize = get_xfer3_block_size();
bufsize = get_xfer_block_size();
if (bufsize > len) {
bufsize = len;
}