Conversation
Hey, this is David, can we talk about this? You can access each element of the concurrent_queue in parallel; you just need to use the unsafe_begin() and unsafe_end() functions. As long as we are only reading the elements and not changing them, this is fine. Also, I thought we were going to return a glm::vec3 with the direction of the closest element. I can make these changes quickly if you like. Just tell me what you want.
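Here is a tiny example of what I mean. It is only a sketch: it assumes oneTBB and glm are on the include path, and that no other thread is pushing or popping while we iterate.

#include <tbb/concurrent_queue.h>
#include <tbb/parallel_for_each.h>
#include <glm/glm.hpp>
#include <cstdio>

int main() {
    tbb::concurrent_queue<glm::vec3> points;
    points.push(glm::vec3(1, 0, 0));
    points.push(glm::vec3(0, 2, 0));
    points.push(glm::vec3(0, 0, 3));

    // unsafe_begin()/unsafe_end() give plain iterators over the queue.
    // They are only safe while nothing is pushing or popping, but
    // read-only access to the elements in parallel is fine.
    tbb::parallel_for_each(points.unsafe_begin(), points.unsafe_end(),
        [](const glm::vec3& p) {
            std::printf("length %f\n", glm::length(p));   // read-only work on each element
        });
    return 0;
}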
We need to use 'chunkmanager', not 'world', so that we don't need to go through every block. The relevant declarations in Compass.cpp are:

#include "Compass.h"
#include
int modx(int s);
int mody(int s);
int modz(int s);
Compass::Compass(chunkmanager input, glm::vec3 pos);
struct CheckType;
glm::vec3 Compass::find(BlockType type);   // searches while (distance <= worldSize/2)
struct populatej;
struct populateQueue;
tbb::concurrent_queue<glm::vec3> Compass::getQueue(glm::vec3& start, int dis, int worldSize);
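For reference, here is roughly how those declarations fit together as a header. This is only a sketch: chunkmanager and BlockType come from the project, the header names and member names are guesses, and the comments describe what the functions appear to do rather than the actual code.

// Compass.h -- sketch only; names and comments are guesses from the declarations above
#pragma once

#include <tbb/concurrent_queue.h>
#include <glm/glm.hpp>
#include "chunkmanager.h"   // assumed project header
#include "BlockType.h"      // assumed project header

// free helpers, presumably wrapping a coordinate into world bounds
int modx(int s);
int mody(int s);
int modz(int s);

// CheckType, populatej, and populateQueue are presumably the parallel_for
// function objects defined in Compass.cpp

class Compass {
public:
    Compass(chunkmanager input, glm::vec3 pos);

    // direction/distance from the stored position to the closest block of the
    // given type; searches while (distance <= worldSize / 2)
    glm::vec3 find(BlockType type);

    // queue of the points at distance dis from start, to be checked in parallel
    tbb::concurrent_queue<glm::vec3> getQueue(glm::vec3& start, int dis, int worldSize);

private:
    chunkmanager chunks;    // guessed member name
    glm::vec3 position;     // guessed member name
};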
Thanks for making those changes, sorry for not responding sooner. I've been studying all day for Koebbe's final tomorrow... Sorry, we changed our original plan and I thought we just wanted to return the distance from a certain point, not the vector, but I was wrong. I've been building the code on my Linux box, so I'll be able to test everything.
cool thanks. |
David and I (Soren) have added a Compass class to the project. It has not been added to the GUI yet, but once it is hooked up it will give the distance from your current position to the closest material of your choosing. Unlike our original proposal, the algorithm does not calculate the shortest walking distance.
We check the area around the player for a certain material, starting at distance one. If that area does not contain the material, we increase the distance of our search area by one. We continue this until we have found the material or have searched the entire world without finding it. We found three places in the algorithm where we obtained speedup by using tbb::parallel_for: we populate the queue of points to check in parallel, we then check in parallel whether each point is the material we want, and finally we check the Boolean array to see whether the material was found at the current distance. This gives us O(n * q), where n is the number of blocks in the world and q is the size of the queue. Having to access each element of the concurrent_queue one at a time slowed the function down more than we would have liked. A rough sketch of the search loop is shown below.
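A minimal sketch of this expanding-distance search, assuming oneTBB and glm are available. BlockType, blockAt(), worldSize, and the shell generation in getQueue() are stand-ins invented for the example (only the two x-faces of each shell are generated to keep it short); they are not the project's actual chunkmanager lookup or shell code.

#include <tbb/concurrent_queue.h>
#include <tbb/parallel_for.h>
#include <glm/glm.hpp>
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <vector>

// --- stand-ins for the project's types (assumptions, not the real code) ---
enum class BlockType { Stone, Diamond };
const int worldSize = 64;

// hypothetical block lookup; the real class would ask the chunkmanager
BlockType blockAt(int x, int y, int z) {
    return (x == 10 && y == 3 && z == 7) ? BlockType::Diamond : BlockType::Stone;
}

// points at distance dis from start (simplified: only the two x-faces
// of the cube shell are generated, to keep the sketch short)
tbb::concurrent_queue<glm::ivec3> getQueue(const glm::ivec3& start, int dis) {
    tbb::concurrent_queue<glm::ivec3> q;
    const int side = 2 * dis + 1;
    // populate the queue of points to check in parallel
    tbb::parallel_for(0, side * side, [&](int i) {
        int dy = i / side - dis;
        int dz = i % side - dis;
        q.push(start + glm::ivec3(-dis, dy, dz));
        q.push(start + glm::ivec3(+dis, dy, dz));
    });
    return q;
}

glm::ivec3 find(const glm::ivec3& start, BlockType type) {
    for (int dis = 1; dis <= worldSize / 2; ++dis) {
        tbb::concurrent_queue<glm::ivec3> q = getQueue(start, dis);

        // drain the queue one element at a time (the serial step that
        // slowed the real function down)
        std::vector<glm::ivec3> pts;
        glm::ivec3 p;
        while (q.try_pop(p)) pts.push_back(p);

        // check each point against the wanted material in parallel
        std::vector<char> found(pts.size(), 0);
        tbb::parallel_for(std::size_t(0), pts.size(), [&](std::size_t i) {
            if (blockAt(pts[i].x, pts[i].y, pts[i].z) == type) found[i] = 1;
        });

        // check the Boolean array to see if anything was found at this distance
        std::atomic<bool> hit(false);
        std::atomic<int> which(-1);
        tbb::parallel_for(std::size_t(0), found.size(), [&](std::size_t i) {
            if (found[i]) { hit = true; which = static_cast<int>(i); }
        });
        if (hit) return pts[which.load()] - start;   // offset to a match at this distance
    }
    return glm::ivec3(0);   // searched the whole range without finding the material
}

int main() {
    glm::ivec3 dir = find(glm::ivec3(0, 0, 0), BlockType::Diamond);
    std::printf("offset to closest match: %d %d %d\n", dir.x, dir.y, dir.z);
    return 0;
}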