
RFC for histogram CPU implementation #1930

Open
wants to merge 26 commits into base: main

Conversation

danhoeflinger
Contributor

Adds an RFC for histogram CPU implementation.

* Is it worthwhile to have separate implementations for TBB and OpenMP, given that the best-performing implementation may differ between them? What is the best heuristic for selecting between algorithms (if one is not the clear winner)?

* How will vectorized bricks perform, and in what situations will it be advantageous to use or not use vector instructions?

Contributor Author

Thanks, these are interesting, and mainly follow similar lines to the approaches outlined here. I will look more into the pcwar SIMD instruction to understand if that is viable to use here.

Contributor Author

While it's interesting to explore, I believe we basically depend on OpenMP simd to provide our SIMD operations. We won't be able to use more complex intrinsics if we want to stick to that.

Further, I'm not actually finding anything like the pcwar instruction(s) they refer to in https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html. I do find some instructions for conflict detection across lanes which could be useful, but again those won't be available through the OpenMP interface.

I think our options are basically to forgo SIMD or to keep a duplicate of the output histogram for each SIMD lane. Even that may be more "hands on" with SIMD details than we have been thus far in oneDPL.
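To illustrate the second option, here is a hypothetical sketch of keeping one histogram copy per SIMD lane so the inner loop has no cross-lane write conflicts. The function name, fixed lane count, and even-binning math are assumptions for illustration only (not oneDPL code), and whether the compiler actually vectorizes the marked loop is compiler-dependent:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: per-lane privatized histogram copies for an even-binned
// histogram. Assumes bins[] is zero-initialized by the caller.
constexpr int lanes = 8; // assumed SIMD width

void histogram_per_lane_sketch(const float* in, std::size_t n, std::uint32_t* bins,
                               std::size_t num_bins, float lo, float hi)
{
    std::vector<std::uint32_t> tmp(lanes * num_bins, 0); // one histogram copy per lane
    const float scale = num_bins / (hi - lo);

    std::size_t i = 0;
    for (; i + lanes <= n; i += lanes)
    {
        // Each lane j only touches its own copy tmp[j * num_bins + ...], so there
        // are no conflicting writes between lanes for the vectorizer to resolve.
        #pragma omp simd
        for (int j = 0; j < lanes; ++j)
        {
            auto bin = static_cast<std::ptrdiff_t>((in[i + j] - lo) * scale);
            if (bin >= 0 && bin < static_cast<std::ptrdiff_t>(num_bins))
                ++tmp[j * num_bins + bin];
        }
    }
    for (; i < n; ++i) // scalar remainder
    {
        auto bin = static_cast<std::ptrdiff_t>((in[i] - lo) * scale);
        if (bin >= 0 && bin < static_cast<std::ptrdiff_t>(num_bins))
            ++bins[bin];
    }
    // Combine the per-lane copies; this loop is trivially vectorizable.
    for (std::size_t b = 0; b < num_bins; ++b)
        for (int j = 0; j < lanes; ++j)
            bins[b] += tmp[j * num_bins + b];
}
```

The obvious downside is the `lanes * num_bins` extra counters, which is exactly the kind of hands-on SIMD bookkeeping mentioned above.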

more formatting fixes

@akukanov
Contributor

akukanov commented Nov 6, 2024

Overall, this all sounds good enough for the "proposed" stage, where it's expected that some details are unknown and need to be determined. I am happy to approve it but will wait for a few days in case @danhoeflinger wants to update the document with some follow-up thoughts on the discussion.

@danhoeflinger
Contributor Author

> Overall, this all sounds good enough for the "proposed" stage, where it's expected that some details are unknown and need to be determined. I am happy to approve it but will wait for a few days in case @danhoeflinger wants to update the document with some follow-up thoughts on the discussion.

I've made a few more minor adjustments, but I think I'm happy with the state of the RFC now. The question I have is: what is the next best step once we merge? I will be working on the rough implementation (already in progress), but should I also open a new draft PR moving this from "proposed" to "supported" as a location for further discussion? Or should I merely open PRs in place if there are significant changes motivated by the experience of implementation?

@akukanov added the RFC label Nov 13, 2024
SIMD + implementation

akukanov previously approved these changes Dec 18, 2024
Contributor

@akukanov left a comment

I approve it for the implementation as a supported functionality, with the following understanding of the proposed design:

  • For multithreaded execution with par and par_unseq, the implementation uses the existing parallel_for backend pattern.
  • Unsequenced policies do not enable vectorization when thread-local histograms are built (the 1st stage of the algorithm), but do enable it when combining those histograms into the final result (the 2nd stage); see the sketch below.
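
As a reference point only, a minimal sketch of that two-stage scheme is below. It uses raw OpenMP and hypothetical names purely for illustration; the actual proposal builds on oneDPL's existing `parallel_for` backend pattern and a per-thread enumerable storage abstraction rather than this code:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>
#include <omp.h>

// Two-stage even-binned histogram sketch. Assumes bins[] is zero-initialized.
void histogram_two_stage_sketch(const float* in, std::size_t n, std::uint32_t* bins,
                                std::size_t num_bins, float lo, float hi)
{
    const int num_threads = omp_get_max_threads();
    // One temporary histogram per thread.
    std::vector<std::vector<std::uint32_t>> local(
        num_threads, std::vector<std::uint32_t>(num_bins, 0));
    const float scale = num_bins / (hi - lo);

    // Stage 1: embarrassingly parallel accumulation into thread-local histograms
    // (no vectorization attempted here).
    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(n); ++i)
    {
        auto& h = local[omp_get_thread_num()];
        auto bin = static_cast<std::ptrdiff_t>((in[i] - lo) * scale);
        if (bin >= 0 && bin < static_cast<std::ptrdiff_t>(num_bins))
            ++h[bin];
    }

    // Stage 2: parallelize over bins and combine the local histograms; this loop
    // is also the candidate for vectorization under unsequenced policies.
    #pragma omp parallel for
    for (std::ptrdiff_t b = 0; b < static_cast<std::ptrdiff_t>(num_bins); ++b)
        for (int t = 0; t < num_threads; ++t)
            bins[b] += local[t][b];
}
```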

The following questions I believe will need to be answered:

  • Which extensions are needed in the parallel backend API to facilitate the use of enumerable storage for per-thread temporary buffers?
  • Will the serial backend also allocate a temporary buffer, or will it directly accumulate into the output sequence and run just the 1st stage? If the latter, how will this difference between backends be handled at the pattern level?
  • Should the 2nd stage always run according to the given execution policy, or would it be better, under certain conditions, to run it serially on the calling thread?

Specification changes are not required.

I would recommend @MikeDvorskiy as the second approver.

of values in each bin, writing to a user-provided output histogram sequence. Currently, `histogram` is not supported
with serial, tbb, or openmp backends in our oneDPL implementation. This RFC aims to propose the implementation of
`histogram` for these host-side backends.

Contributor

@MikeDvorskiy Dec 19, 2024

Let me share my thoughts:
In my understanding, an RFC is not a book... So, I would prefer to have a short, concise, and precise description of what is offered, without frills, like a mathematical theorem. For example:

"The oneDPL library added histogram APIs, currently implemented only for device policies with the DPC++ backend. These APIs are defined in the oneAPI Specification 1.4. Please see the
oneAPI Specification for the details. The host-side backends (serial, TBB, OpenMP) are not yet supported. This RFC proposes extending histogram support to these backends."

Contributor Author

Yes, I've accepted your language here. Thanks.

to use a serial implementation or a host-side parallel implementation of `histogram`. It's natural for a user to expect
that oneDPL supports these other backends for all APIs. Another motivation for adding the support is simply to be spec
compliant with the oneAPI specification.

Contributor

Since this is not storytelling, I would suggest omitting introductory expressions like "It may make more sense" or "It's natural for a user to expect"... Only short and exact information.

For example,
"There are many cases for using a host-side serial or a host-side parallel implementation of histogram. Another motivation for adding the support is simply to be compliant with the oneAPI specification."

Contributor Author

Suggestion taken. Thanks.

low computation algorithm which will likely be limited by memory bandwidth, especially for the evenly-divided case.
Minimizing and optimizing memory accesses, as well as limiting unnecessary memory traffic of temporaries, will likely
have a high impact on overall performance.

Contributor

Taking into account the thought I shared above, I would propose rephrasing it to keep the main point shorter:

"A histogram algorithm is a memory-bound algorithm. So, the implementation should take care to reduce memory accesses and minimize temporary memory traffic."

Contributor Author

Mostly taken. Thanks.

@danhoeflinger
Contributor Author

danhoeflinger commented Dec 19, 2024

I plan to do a cleanup pass and also address some of what Alexey mentioned above today / tomorrow morning. I will also try to cut down verbosity.

I also have a draft PR up for the implementation, which is a work in progress: #1974. I'll add this link to the RFC with my next changes.

histogram is for the number of elements in the input sequence to be far greater than the number of output histogram
bins. We may be able to use that to our advantage.

### Code Reuse
Contributor

I guess we can omit this topic entirely. It says nothing specific about 'histogram', just general wording which could apply to any new feature in oneDPL...

Contributor Author

I've removed some of the general language and added something which is important for histogram, in an attempt to address feedback from @akukanov and clarify where the implementation of the algorithm will live.

Our goal here is to make something maintainable and to reuse as much as possible of what already exists and has been
reviewed within oneDPL. As with everything else, this must be balanced against performance considerations.

### unseq Backend
Contributor

@MikeDvorskiy Dec 20, 2024

"unseq Backend"
Basically, we don't have such back end officially. Yes, sometimes we used such term in the internal communication as for "name" for a set of functions with "pragma simd" implementation. But we did not specify and publish API for that. So, I suggest renaming this topic to "SIMD/openMP SIMD Implementation" f.e.

Contributor

@akukanov Dec 20, 2024

I think this part is about developing (or not) an implementation for unsequenced policies.
I do not mind calling it "unseq backend" in the design docs, but Mikhail is correct that it's rather informal (while "parallel backend" is somewhat more formal).

Contributor Author

I like the proposed name for the section better anyway for what is discussed. Thanks.

Finally, for our below proposed implementation, there is the task of combining temporary histogram data into the global
output histogram. This is directly vectorizable via our existing brick_walk implementation.

### Serial Backend
Contributor

@MikeDvorskiy Dec 20, 2024

https://github.com/oneapi-src/oneDPL/pull/1930/files#diff-fb5f6394ad0d350d719b9f31b139fa60c347ec64795c78e56875c4f002aeb0e7R25
We already have the key requirements topic where we enumerate all the backends we propose to support.
I think that is good enough, and we can also omit this "Serial Backend" topic.

Explaining what the "Serial Backend" means, as with the other backends, is a kind of general oneDPL description and not related to the RFC for the histogram feature, IMHO.

Contributor Author

With some recent changes, there are some specifics about the serial implementation I wanted to add here, so I've kept the section.

address feedback

policies types, but always provide a serial unvectorized implementation.

## Existing Patterns

Contributor

If we intend to give some information about the oneDPL parallel backend patterns on which histogram could be based, I would note that there is no "count_if" pattern; there is a "reduce" ("transform_reduce") pattern.
When one says "reduce", it becomes more or less obvious that a histogram calculation based on reduce is not effective at all.

Contributor Author

I clarified the language a little here to make it clearer that `count_if` uses reduce internally. I still think it deserves some text describing it, as it may not be immediately obvious to everyone that reduce is not well matched.

time.

### Other Unexplored Approaches
* One could consider some sort of locking approach which locks mutexes for subsections of the output histogram prior to
Contributor

BTW, I have a curiosity question. Which approach does NVidia use?

Contributor Author

NVidia has a similar API within CUB but not within Thrust, and therefore does not have a CPU implementation that I am aware of, only one specifically for a GPU device.

cases which are important, and provides reasonable performance for most cases.

### Embarrassingly Parallel Via Temporary Histograms
This method uses temporary storage and a pair of embarrassingly parallel `parallel_for` loops to accomplish the
Contributor

What does the term "Embarrassingly Parallel" mean?

Contributor

@MikeDvorskiy Dec 27, 2024

  1. Update: got it: https://en.wikipedia.org/wiki/Embarrassingly_parallel

  2. Of course, if you are solving a concrete task, you are allowed to use all the machine resources, and there are no other workloads on the node, the best way to calculate a histogram is to divide the work statically: each thread calculates a local histogram, and afterwards the local histograms are reduced into one.

  3. But, talking about parallelism in a general-purpose library, we have to keep in mind that the end user's application can work in "different circumstances", depending on the application type, task, real-time data, other workloads on the same host, and many other things.
    When we were developing the TBB backend we kept these things in mind and preferred to use the TBB auto partitioner (instead of the static one, for example).
    Composability reasons also matter here.

  4. BTW, have you considered a "common parallel reduce" pattern in general (and tbb::parallel_reduce in particular) for the histogram calculation? It seems the parallel histogram calculation maps onto the common reduce (with a certain "big" grainsize): each Body calculates a local histogram (bins), and the Combiner sums all the local bins into the final ones (a sketch follows below).
    Additionally, if the number of bins is "big" we can apply a second level of parallelism within the Combiner code: SIMD, or even "parallel_for" plus SIMD if the number of bins is "too big".
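
For reference, a hedged sketch of that `tbb::parallel_reduce` formulation; the function name and the even-binning math are illustrative assumptions, not the RFC's proposed code:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_reduce.h>

std::vector<std::uint32_t> histogram_reduce_sketch(const float* in, std::size_t n,
                                                   std::size_t num_bins, float lo, float hi)
{
    const float scale = num_bins / (hi - lo);
    return tbb::parallel_reduce(
        tbb::blocked_range<std::size_t>(0, n),
        std::vector<std::uint32_t>(num_bins, 0), // identity: an empty histogram
        // Body: accumulate a subrange into a local histogram.
        [&](const tbb::blocked_range<std::size_t>& r, std::vector<std::uint32_t> h) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
            {
                auto bin = static_cast<std::ptrdiff_t>((in[i] - lo) * scale);
                if (bin >= 0 && bin < static_cast<std::ptrdiff_t>(num_bins))
                    ++h[bin];
            }
            return h;
        },
        // Combiner: sum the local bins into one histogram.
        [](std::vector<std::uint32_t> a, const std::vector<std::uint32_t>& b) {
            for (std::size_t j = 0; j < a.size(); ++j)
                a[j] += b[j];
            return a;
        });
}
```

Note that this functional form passes whole histograms around by value, which illustrates the copy concern raised in the reply below.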

Contributor Author

  1. Yes, although I think there is no reason here to do a static division of work; rather, we can rely upon our existing parallel_for implementation to provide a good, composable implementation.

  2. Agree, which is why the intent is to use the existing parallel_for structure (including partitioners) to implement the parallelism. If we were to do it from scratch, we would do it in a similarly composable way, but it is better to rely upon existing infrastructure.

  3. Yes, I thought about this. For TBB, and even more so for OpenMP, the built-in reduction functionality is geared toward very simple, lightweight types as the reduction variable, whereas we may have an arbitrarily large array. Especially since we want a unified implementation, it does not seem like these backends are really set up to handle such large reduction variables. It seems we should take more control to ensure no unnecessary copies are made and that the final combination is done performantly, based on the knowledge we have of the task. The implementation remains quite simple and unified.

This method uses temporary storage and a pair of embarrassingly parallel `parallel_for` loops to accomplish the
`histogram`.

For this algorithm, each parallel backend will add a `__thread_enumerable_storage<_StoredType>` struct which provides
Contributor

  1. Why is TLS used, and not global memory with the thread id as a key, e.g. bins[thread_id]?
  2. Does TBB guarantee that the same threads finalize the work? "The same threads" meaning the threads which started the work?

Contributor Author

  1. There are a number of reasons:
    a) My understanding is that in TBB, while it may be technically possible to get the local thread id within an arena, it is an undocumented API and generally discouraged and against the TBB mindset. Using TLS seems to be the preferred method specifically with TBB.
    b) While what you suggest perhaps fits better within OpenMP, we want to create a single implementation and not require a `__parallel_histogram` within every current and future backend, but rather depend upon existing functionality within the backend as much as we can (in this case `__parallel_for`).
    c) With smaller values of n and num_bins and a larger number of threads, not all threads should be used because of the overhead associated with allocating and initializing each temporary bin copy. We can let the partitioner decide how many blocks to employ, but we want to avoid unnecessary allocation and initialization overheads wherever possible.

I will mention a downside for completeness, but in my opinion it is outweighed here:
It requires implementing a thread-local storage class for each backend. This is only non-trivial for OpenMP. It has been written generically, though, to serve future patterns, so it is nice to have.
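
For the TBB case only, an illustrative sketch of this TLS approach using `tbb::enumerable_thread_specific` directly; the RFC's actual proposal wraps this behind a backend-neutral `__thread_enumerable_storage` and reuses the `__parallel_for` pattern, and the function name and binning math here are assumptions:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/enumerable_thread_specific.h>
#include <tbb/parallel_for.h>

// Assumes bins[] is zero-initialized by the caller.
void histogram_tls_sketch(const float* in, std::size_t n, std::uint32_t* bins,
                          std::size_t num_bins, float lo, float hi)
{
    // Each thread lazily creates its own zero-initialized histogram on first use,
    // so threads that never receive a chunk pay no allocation/initialization cost.
    tbb::enumerable_thread_specific<std::vector<std::uint32_t>> tls(
        [num_bins] { return std::vector<std::uint32_t>(num_bins, 0); });
    const float scale = num_bins / (hi - lo);

    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, n),
        [&](const tbb::blocked_range<std::size_t>& r) {
            auto& h = tls.local(); // this thread's private histogram
            for (std::size_t i = r.begin(); i != r.end(); ++i)
            {
                auto bin = static_cast<std::ptrdiff_t>((in[i] - lo) * scale);
                if (bin >= 0 && bin < static_cast<std::ptrdiff_t>(num_bins))
                    ++h[bin];
            }
        });

    // Combine per-thread histograms; the RFC proposes doing this with a second
    // parallel_for over the bins rather than this serial loop.
    for (const auto& h : tls)
        for (std::size_t b = 0; b < num_bins; ++b)
            bins[b] += h[b];
}
```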

Contributor Author

  2. I'm not exactly sure what you mean here by "finalize the work". If you mean the second `parallel_for`, then no: we are explicitly parallelizing over a different dimension (num_bins) and accumulating across the temporary histograms that were used by different threads. TBB does guarantee, though, that each thread will always use its own TLS for each grain of work when it is retrieved through local().
