I2c peripheral trait #179
Such a trait might be a bit tricky to set up because essentially this would have to be some sort of callback which gets invoked when some master asserts the bus. I guess the callback is even the easiest part, but documenting all the invariants to make sure that a HAL implements the I2C slave driver correctly might be a bit hairy...
FWIW, I am building a decoder for the SMBus protocol that should be agnostic to a polling, interrupt, or DMA approach. It requires four events: Initiated{direction}, RequestedByte{&mut byte}, ReceivedByte{byte}, and Stopped. It also requires a command handler trait where functions for read_byte etc. are collected. I ultimately want to export a macro for annotating functions with #[read_byte_data(0x42)] which would generate the custom code for the protocol handling, like derive(smbus) or derive(i2c). https://github.com/barafael/smbus-slave-state-machine RTIC-based WIP implementation: https://github.com/barafael/stm32f0-rtic-smbus-slave (not up to date with my library though).
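For readers skimming the thread, here is a rough sketch of how those four events could look as a Rust enum. This is purely illustrative: the `Direction` type, the lifetime on `RequestedByte`, and the name `SmbusEvent` are assumptions, since the comment only names the events.

```rust
/// Hypothetical transfer direction as seen from the slave.
pub enum Direction {
    MasterRead,
    MasterWrite,
}

/// Sketch of the four events named in the comment above.
pub enum SmbusEvent<'a> {
    /// A master addressed us; the transfer direction is now known.
    Initiated { direction: Direction },
    /// The hardware needs the next byte to transmit; the handler writes it into `byte`.
    RequestedByte { byte: &'a mut u8 },
    /// A byte arrived from the master.
    ReceivedByte { byte: u8 },
    /// A stop condition was seen on the bus.
    Stopped,
}
```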
I needed this for a project I'm working on and have something that's working well for me. The interface looks like this:

```rust
pub trait I2cPeripheral {
    /// Returns the next event that needs to be handled.
    fn next_event(&mut self) -> I2cEvent;

    /// Write a byte to I2C. Should only be called when `I2cEvent::NeedByte` is
    /// received.
    fn write_byte(&mut self, byte: u8);
}

#[derive(Eq, PartialEq, Debug)]
#[non_exhaustive]
pub enum I2cEvent {
    None,
    /// Emitted when the I2C controller addresses us in read mode.
    StartRead,
    /// Emitted when the I2C controller addresses us in write mode.
    StartWrite,
    /// Emitted when the I2C controller sends a stop signal.
    Stop,
    /// Emitted when the next byte is required. Call `write_byte`.
    NeedByte,
    /// Emitted when we receive a byte from the I2C controller.
    ByteReceived(u8),
    /// Emitted when an error occurs on the I2C bus.
    Error(I2cError),
}

#[derive(Eq, PartialEq, Debug)]
pub enum I2cError {
    Overrun,
    Underrun,
    BusError,
}
```

I've used this interface both for polling and interrupt-driven use. For the interrupt-driven case, I initially did it without RTIC, then with RTIC; the I2C interrupt just calls `next_event`.

I initially tried to use callbacks, but it got pretty nasty with ownership and lifetimes, since you need multiple callbacks, one for each type of event, and those callbacks all need access to some shared data. Perhaps there's a way to make it nice with callbacks, but I've found an event-driven model to work really well.

If there's interest, I can look to upstream this interface to this crate, along with at least one implementation (stm32g0xx-hal). I'm just now reading @barafael's comment above, and the design sounds pretty similar.
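To make the usage concrete, here is a minimal, hypothetical polling loop driving the trait above. The `poll` function and the `tx` byte source are placeholders, not part of the proposal; only `I2cPeripheral` and `I2cEvent` come from the snippet above.

```rust
/// Sketch of a protocol layer consuming events from the trait above.
fn poll<P: I2cPeripheral>(dev: &mut P, tx: &mut impl Iterator<Item = u8>) {
    match dev.next_event() {
        I2cEvent::None => {}
        I2cEvent::StartRead => { /* controller wants to read from us */ }
        I2cEvent::StartWrite => { /* controller is about to write to us */ }
        I2cEvent::NeedByte => {
            // The only place `write_byte` is called, per the trait's documentation.
            dev.write_byte(tx.next().unwrap_or(0xFF));
        }
        I2cEvent::ByteReceived(_byte) => { /* feed it to the protocol state machine */ }
        I2cEvent::Stop => { /* transaction finished; reset protocol state */ }
        I2cEvent::Error(_e) => { /* report or recover */ }
        _ => {} // `I2cEvent` is `#[non_exhaustive]`
    }
}
```

The same function body works whether it is called from a busy loop or from the I2C interrupt handler, which is what makes the event-based shape attractive.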
Looks neat, I think the event-based communication makes the most sense, though my worry is that it might not be zero-cost (or at least as low-cost as it can be)? There are a few things between y'all's implementations that I think are worth talking about.

The fact that two people have independently come to the same conclusion is pretty reassuring. Do you have a fork of embedded-hal that I could use as a basis for seeing if I can implement this on other chips? Are these lines the only ones you added to the HAL repo?
There is also this pull request on stm32f0xx-hal, which has a similar design: stm32-rs/stm32f0xx-hal#124.
Regarding the buffer size, we could totally "const genericize" it :) I just wasn't aware of const generics at the time, and 32 bytes is the maximum block transfer length of SMBus; I2C itself is not bounded. I think the nice thing about @davidlattimore's and my implementations is that they are completely independent of polling/interrupt/DMA and could even work with bit-banged I2C peripherals. We should consider whether our implementations really should belong to a HAL. Maybe just the hardware initialization for an I2C peripheral in "slave mode" should be part of the HAL, and any SMBus/I2C slave library can work with that. Just a thought.
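As an illustration of what "const genericizing" the buffer could look like, here is a hypothetical fixed-capacity buffer with a const-generic size; the name `RxBuffer` and its methods are made up for this sketch.

```rust
/// Hypothetical receive buffer whose capacity is a const generic, so SMBus
/// users can pick 32 (the block-transfer maximum) and raw I2C users can pick
/// whatever bound suits their protocol.
pub struct RxBuffer<const N: usize> {
    data: [u8; N],
    len: usize,
}

impl<const N: usize> RxBuffer<N> {
    pub const fn new() -> Self {
        Self { data: [0; N], len: 0 }
    }

    /// Store a received byte; returns `false` on overrun.
    pub fn push(&mut self, byte: u8) -> bool {
        if self.len < N {
            self.data[self.len] = byte;
            self.len += 1;
            true
        } else {
            false
        }
    }

    /// The bytes received so far.
    pub fn bytes(&self) -> &[u8] {
        &self.data[..self.len]
    }
}

// e.g. `let mut buf: RxBuffer<32> = RxBuffer::new();` for SMBus block transfers.
```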
Regarding optimisation of an event-based API, I think it might be possible for the compiler to make it close to zero cost.

I currently have the code that I pasted above in a separate, internal crate, but the above code is all there is on the HAL side.

I avoided buffering at this layer and put my buffer(s) in the protocol layer, since my protocol layer knows how many bytes it expects to send and receive. It should be possible to have a buffered wrapper on top of this if someone wants one.

Something else worth mentioning is that with a couple of extra methods on the trait (…)
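For illustration, here is a hypothetical protocol-layer handler that keeps its own storage and drives the trait from above; none of this would live in the HAL, and the register-file behaviour is just an example protocol.

```rust
/// Hypothetical protocol layer: a tiny register file exposed over I2C.
struct RegisterFile {
    regs: [u8; 8],
    addr: usize,
    first_byte: bool,
}

impl RegisterFile {
    fn on_event(&mut self, event: I2cEvent, i2c: &mut impl I2cPeripheral) {
        match event {
            I2cEvent::StartWrite => self.first_byte = true,
            I2cEvent::ByteReceived(b) if self.first_byte => {
                // The first written byte selects the register address.
                self.addr = usize::from(b) % self.regs.len();
                self.first_byte = false;
            }
            I2cEvent::ByteReceived(b) => {
                // Subsequent writes store data, auto-incrementing the address.
                self.regs[self.addr] = b;
                self.addr = (self.addr + 1) % self.regs.len();
            }
            I2cEvent::NeedByte => {
                // Reads stream registers out starting at the current address.
                i2c.write_byte(self.regs[self.addr]);
                self.addr = (self.addr + 1) % self.regs.len();
            }
            _ => {}
        }
    }
}
```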
Are you suggesting splitting this trait into a separate crate? What would be the hypothetical benefit of that? Modularity?
Ah, I see, I wasn't thinking about volatility at all. I think I like the …

Glad to see y'all thinking about and discussing this. I intended (read: vaguely considered) PRing what I've been using, but it's really silly compared to both of your implementations.
Not the trait, I think; the "trait" being something like what @davidlattimore is proposing. In my crate, there is no such trait (it is hardware-agnostic). It just helps with en/decoding the messages sent over SMBus, because I found myself doing that over and over again manually. The benefit would be flexibility, modularity, and keeping the HAL free of specific implementations.
What do folks think about how errors should be handled in this kind of trait? In the trait I pasted above, I included an explicit error enum alongside the trait definition. I wonder if a happy medium might be to allow implementations to supply their own error type, but to require that the error type implement `Into<embedded_hal::I2cError>`:

```rust
pub trait I2cPeripheral<E: Into<I2cError>> {
    fn next_event(&mut self) -> I2cEvent<E>;
    ...
}

pub enum I2cEvent<E: Into<I2cError>> {
    ...
    Error(E),
}
```

That way, implementations of the trait can choose whether to just use embedded-hal's I2cError or their own error type, and users of the trait can choose whether to preserve the original error type or convert to embedded-hal's I2cError. Thoughts?
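Here is a sketch of what the implementation side might look like under that bound. The chip-specific `ChipError` type and its variants are made up; `I2cError` is the enum proposed earlier in the thread, not an existing embedded-hal type.

```rust
/// Hypothetical HAL-side error type with chip-specific detail.
pub enum ChipError {
    Overrun,
    ArbitrationLost,
}

impl From<ChipError> for I2cError {
    fn from(e: ChipError) -> Self {
        match e {
            ChipError::Overrun => I2cError::Overrun,
            // No dedicated generic variant, so fold it into BusError.
            ChipError::ArbitrationLost => I2cError::BusError,
        }
    }
}

// A driver that doesn't care about chip detail can convert at the boundary:
//     if let I2cEvent::Error(e) = dev.next_event() {
//         let generic: I2cError = e.into();
//     }
```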
Something else I've been thinking about is how best to handle `write_byte`. Something like:

```rust
pub enum I2cEvent {
    ...
    StartRead(ByteRequest),
    ...
    NeedByte(ByteRequest),
}

pub trait I2cPeripheral {
    fn write_byte(&mut self, byte: u8, request: ByteRequest);
}
```

I'm unsure if there's hardware where sending the first byte when you get a start event isn't the thing to do. If there is, then we could make it … If we instead say that …
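One way to read this design is as a move-only token that makes the "only call `write_byte` when asked" rule hard to violate. A rough sketch follows; the original comment doesn't show `ByteRequest`'s definition, so the empty, non-constructible struct here is an assumption.

```rust
/// Hypothetical token: the only way to obtain one is from a `StartRead` or
/// `NeedByte` event, so a byte can't be written unless the hardware asked for it.
pub struct ByteRequest {
    _private: (), // not constructible outside the HAL
}

pub enum I2cEvent {
    StartRead(ByteRequest),
    NeedByte(ByteRequest),
    Stop,
    // ... other variants as before
}

pub trait I2cPeripheral {
    fn next_event(&mut self) -> I2cEvent;
    /// Taking `request` by value means each request can be answered exactly once.
    fn write_byte(&mut self, byte: u8, request: ByteRequest);
}
```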
Makes sense, maybe this is worth creating a separate issue altogether? One thing that I'm not sure about is how it would work with the … Also, should the error go in a …
Shouldn't the error type be an associated type rather than generic? Like:

```rust
pub trait I2cPeripheral {
    type Error: Into<I2cError>;
    ...
}
```

Is there any mechanical difference between the two approaches?

Also, if you have to call …
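For comparison, here is a sketch of how the rest of the trait and the event enum could look with an associated error type; everything beyond the two lines quoted above is an assumption filled in from the earlier generic version.

```rust
pub trait I2cPeripheral {
    /// The implementation picks its error type once, instead of the trait
    /// being generic over `E` at every use site.
    type Error: Into<I2cError>;

    fn next_event(&mut self) -> I2cEvent<Self::Error>;
    fn write_byte(&mut self, byte: u8);
}

/// The event enum then simply carries the implementation's error type.
pub enum I2cEvent<E> {
    None,
    StartRead,
    StartWrite,
    Stop,
    NeedByte,
    ByteReceived(u8),
    Error(E),
}
```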
This seems like something that could be done in the HAL implementation, e.g.:

```rust
fn write_bytes(&mut self, bytes: &[u8]) -> Result<(), Self::Error> {
    // Probably better to make this an iterator or something.
    let mut idx = 0;
    while idx < bytes.len() {
        // Wait for the transmit data register to be empty, then feed the next byte.
        if self.i2c.isr.read().txe().is_empty() {
            self.i2c.txdr.write(|w| w.txdata().bits(bytes[idx]));
            idx += 1;
        }
    }
    Ok(())
}
```
Although there is an interface for an I2C blocking master, a corresponding slave interface is missing.
Are there any ideas or work already done on this?