Remove n_outcomes restriction #66
I guess it also comes up in risk and information gain, but every use still appears to be in …
Agreed, this is a major concern moving forward, and one that we should definitely address. At the moment, I think the only place that …
Talking to Thomas today, it seems that he has basically already solved this issue in his branch. The alternative is to always have expectation values calculated in this way, even if there are a small number of outcomes.
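(For illustration only: a minimal sketch of what estimating an expectation value from sampled outcomes, rather than by enumerating the outcome space, could look like. The function names and signatures here are hypothetical and are not taken from any QInfer branch.)

```python
import numpy as np

def sampled_expectation(simulate_outcome, f, modelparams, weights, expparam,
                        n_samples=1000, rng=None):
    """Estimate E[f(outcome)] under the posterior predictive by sampling, rather
    than by summing over an enumerated (possibly infinite) outcome space.
    All names here are hypothetical sketches, not QInfer API."""
    rng = np.random.default_rng() if rng is None else rng
    # Draw particles according to their posterior weights, then one outcome each.
    idx = rng.choice(len(weights), size=n_samples, p=weights)
    outcomes = [simulate_outcome(modelparams[i], expparam, rng) for i in idx]
    return float(np.mean([f(o) for o in outcomes]))
```

The estimate only converges with enough outcome samples, which is the trade-off against exact enumeration when the number of outcomes is small.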
I'll take a more detailed look at @whitewhim2718's branch soon, then. In the meantime, tagging @jarthurgross into this Issue, as the …
For concreteness, I can think of two main use cases that should be addressed by a new design:
…
And don't forget discrete-valued but with infinitely many outcomes, such as Poisson! This is mainly what I am interested in at the moment.
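(To make the Poisson example concrete, a tiny standalone sketch, not taken from QInfer: the outcome space is all non-negative integers, so every individual likelihood is easy to evaluate even though there is no finite number of outcomes to report.)

```python
import numpy as np
from scipy.stats import poisson

def poisson_likelihood(count, rate):
    # Pr(count | rate) is defined for every count = 0, 1, 2, ..., so the outcome
    # space is countably infinite even though each outcome is discrete.
    return poisson.pmf(count, mu=rate)

def simulate_poisson_outcome(rate, rng=None):
    # Drawing an outcome is still easy; enumerating all outcomes is not.
    rng = np.random.default_rng() if rng is None else rng
    return int(rng.poisson(rate))
```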
Very good point, I've added that to the list as well, then.
I'd like to say that while I think I have a solution, my branch Continuous-Outcomes is very experimental right now. One glaring issue is that I added a callback feature to the SMC update to reselect the sampled outcomes at every update; somehow I neglected to actually add this to the abstract continuous model, but added it to my implementation. I will clean this all up, but I may not have the time until next week. I also want to say that I am very much open-minded on how to implement these enhancements, and am open to changes. Additionally, I have done some other development on the ExperimentDesigner class (among other things) in that branch that is most likely not relevant to this issue and probably should be split out.
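(A rough sketch of the kind of hook described above, i.e. reselecting the sampled outcomes at every SMC update. This is not the code from the Continuous-Outcomes branch; all names are made up for illustration.)

```python
import numpy as np

def smc_update_with_reselection(weights, particles, likelihood, outcome, expparam,
                                reselect_outcomes=None):
    """Toy SMC weight update with an optional callback that lets the model redraw
    the outcome samples it keeps around for later expectation-value estimates."""
    if reselect_outcomes is not None:
        # The callback fires on every update, mirroring the behaviour described above.
        reselect_outcomes(particles, weights, expparam)
    new_weights = weights * likelihood(outcome, particles, expparam)
    return new_weights / np.sum(new_weights)
```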
Somewhat tangential, but what do people think about which version we should target with this? Personally, I think it makes sense to target a new QInfer version 1.1, so that we can get a stable version out there to build off of while working on ensuring that the new design suits all of our needs and is extensible enough to not cause another problem down the road. Pushing to 1.1 would also fit well with some ongoing work on offline experiment design, as v1.1 could then focus on generalized outcomes and experiment design as the two big new features. Thoughts?
I agree that the release of v1.0 should be prioritized over this feature. I think it is going to take some discussion and consensus to implement, and realistically that will take at least a month. It should be possible to implement these changes without breaking anything, or at least breaking very little (the current implementation I have is backwards compatible, but maybe not the best way to go if we think long term). I know I haven't been participating in the recent flurry of work (I haven't gotten to code at all in the last month), but I am looking forward to v1.0 to build off of.
Here's what I'd do, depending on what course of action you want for the implementation:
…
There is one other place that the assumption of integer outcomes gets used, and that's in the …
I am interested in this issue because my research currently involves the need for Poisson outcomes; I am writing the model as this discussion takes place. However, I am calculating the risk using another language, so I don't currently need the risk or information gain functions, and I think in this case I can just put nonsense in for …

In terms of breaking changes, after having thought about it a bit, I think (but you never know until you try) that there are very uncompromising ways to fix this issue without breaking anything. The only thing I would be tempted to break right now is the class name of …
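(For illustration, a stripped-down sketch of a Poisson-outcome model that stubs out n_outcomes with a placeholder while still providing a likelihood and a simulator. It deliberately does not inherit from the real QInfer base class, and the method names and array shapes are simplified relative to the actual interface.)

```python
import numpy as np
from scipy.stats import poisson

class PoissonCountModel:
    """Sketch only: mirrors the rough shape of the existing Model interface
    without claiming to match its exact signatures."""

    def n_outcomes(self, expparams):
        # No meaningful finite value exists; return a placeholder so that code
        # paths which never enumerate outcomes keep working.
        return np.full(len(expparams), -1)

    def likelihood(self, outcome, rates, expparam):
        # Pr(outcome | rate * exposure) for each candidate rate; valid for any count >= 0.
        return poisson.pmf(outcome, mu=rates * expparam)

    def simulate_experiment(self, rates, expparam, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        return rng.poisson(rates * expparam)
```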
Given a discussion with @cgranade yesterday, these are the proposed structural changes to abstract_class (which contain some breaking changes) for version 1, to make future integration with feature-generalized-outcomes less invasive (I am writing this here so that any criticisms can be made before someone puts the much greater effort into a PR): …
The idea is that functions like the Bayes risk in … (a sketch of risk estimation from sampled outcomes follows below).

Prototypical scenarios to consider (add more if you have them):
…
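(As referenced above, a rough sketch of estimating the Bayes risk from sampled outcomes instead of an enumerated outcome space; the quadratic-loss choice and all function names are assumptions made for illustration, not QInfer's implementation.)

```python
import numpy as np

def sampled_bayes_risk(simulate_outcome, likelihood, weights, particles, expparam,
                       n_outcome_samples=400, rng=None):
    """Estimate the Bayes risk of an experiment by sampling outcomes from the
    current posterior predictive rather than enumerating a finite outcome space."""
    rng = np.random.default_rng() if rng is None else rng
    risk = 0.0
    for _ in range(n_outcome_samples):
        # Draw a "true" particle from the posterior, then a hypothetical outcome.
        i = rng.choice(len(weights), p=weights)
        outcome = simulate_outcome(particles[i], expparam, rng)
        # Hypothetical posterior update for that outcome.
        post = weights * likelihood(outcome, particles, expparam)
        post /= post.sum()
        # Quadratic loss: posterior variance of the (scalar) model parameter.
        mean = np.sum(post * particles)
        risk += np.sum(post * (particles - mean) ** 2)
    return risk / n_outcome_samples
```

One appeal of this structure is that it looks the same for finite, countably infinite, and continuous outcome spaces; only the outcome simulator changes.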
I agree with everything said here. With regards to my branch, which contains my current work on this issue, not many changes will have to be made to come in line with the above.

Edit: I was mistaken on the scope of this Issue. I had not realized that these changes were in preparation for the later branch that @ihincks and I hope to submit, and not the work itself. Regardless, my original statement above still stands.
#71 resolves the preparation step.
Just want to put this here, not necessarily because it is urgent, but because it has come up as a problem for a few people.

Models have a method called n_outcomes which specifies the number of outcomes in an array of expparams. Sometimes models do not have a finite number of outcomes. For example, measurement of NV centers involves drawing Poisson random variates, which has an infinite but discrete number of outcomes. Or, measurements in NMR are continuous valued.

Correct me if I am wrong, but searching all source files as of 2c86c94 for the string n_outcomes, the only file in which it is being used rather than being defined is the updater code of smc.py.
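(Schematically, the pattern at issue looks like the following, where the outcome space is enumerated as 0, ..., n_outcomes - 1 and summed over. This is a simplified illustration, not the actual contents of smc.py, and it breaks as soon as the outcome set is infinite or continuous.)

```python
import numpy as np

def expectation_over_outcomes(f, likelihood, weights, particles, expparam, n_outcomes):
    # Enumerate every outcome explicitly -- only possible when n_outcomes is finite.
    total = 0.0
    for outcome in np.arange(n_outcomes):
        # Posterior predictive probability of this outcome under the current particles.
        pr = np.sum(weights * likelihood(outcome, particles, expparam))
        total += pr * f(outcome)
    return total
```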