Implement distributions for f32 #100
This is a client-side workaround:

```rust
#[derive(Copy, Clone, Debug)]
pub struct F32<S>(pub S);

impl<S> Sample<f32> for F32<S>
    where S: Sample<f64>
{
    fn sample<R>(&mut self, rng: &mut R) -> f32 where R: Rng {
        self.0.sample(rng) as f32
    }
}

impl<S> IndependentSample<f32> for F32<S>
    where S: IndependentSample<f64>
{
    fn ind_sample<R>(&self, rng: &mut R) -> f32 where R: Rng {
        self.0.ind_sample(rng) as f32
    }
}
```
I think this is doable, but would require re-implementing quite a few algorithms, e.g. the ….

The real question is: is there a good reason to use f32 implementations? On GPUs, yes, but on CPUs I'd have thought it would be better to do the computation in f64 and convert.
In principle it should be faster, especially if the code can be vectorized. However, the current API, which generates one float at a time, is not well suited to vectorization.
No, it's not; but nor are many applications. If performance improvements are a goal, then one should start with a specific problem needing (and able to take advantage of) the optimisations. Also, I don't expect ….
I'm not so sure about that. I think a lot of applications could benefit from generating 4 floats at a time (for instance, rejection sampling and sampling from the unit sphere).
> @pitdicker: I think it would be straightforward to add an f32 variant if we wanted to?

Yes, I think so too. It would mean updating the ….

I always thought of this issue as something nice that we should do, with just the question of 'when' remaining. We definitely should figure out how to make better use of vectorisation someday, though.

@fizyk20 As you are already working on/with the distributions code, what do you think? Or would you be interested?
@pitdicker I don't really have an opinion here 😁 f32 variants don't make too much sense for the Poisson and binomial distributions, which I've been involved with, as they are inherently discrete. It might make sense to add u32/u16/u8/i64/etc. variants, though, but this would probably mean that more type annotations would be required in code using the API (type inference is harder when there are so many similar possibilities).
If one is targeting an ….
So we have some motivation for this at least, though I'm still not convinced (@peterhj's case can simply use an external implementation as required). If we do want to add this, what should support look like?
```rust
pub struct Normal<FP> {
    mean: FP,
    std_dev: FP,
}

impl<FP> Distribution<FP> for Normal<FP>
    where FP: Copy + Add<Output = FP> + Mul<Output = FP> + From<f64>
{
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> FP {
        let n = rng.sample(StandardNormal);
        self.mean + self.std_dev * n.into()
    }
}
```

The second method also has annoying error messages; we try to avoid stuff like this if we can:
(The problem here is that the impl requires ….)
A stronger motivation for this is generic code: nalgebra implements ….
It should be straightforward to ….
Edit: I think we could actually support both the first and last options simultaneously. That might not be the best idea, but it means we could implement ….

But the big question here for parameterised distributions is this: do we actually need the parameters to be …?
Another option might be changing …. Using ….
Do associated types help anywhere? We already have to deal with the fact that a distribution may implement sampling for multiple types, thus requiring:

```rust
let x = <Standard as Distribution<f32>>::sample(&Standard, &mut rng);
// but usually we just write:
let x: f32 = Standard.sample(&mut rng);
```

Further investigation into distributions: …
I guess the right approach here is to make most of these types generic over their main parameter type (…).
Normal and the other continuous distributions are only implemented for f64 at the moment. It would greatly simplify and beautify code that integrates with Rand if they could be implemented for f32 too.