
Linear interpolation of two vector arrays with different lengths

54 comments, last by ManitouVR 4 years, 8 months ago

It is indeed a post-processing function. It does exactly what you described in your original question. You need to pass it a vector of points with associated absolute times (starting at 0, in increasing order), and another vector of just points. It will figure out what times to assign to the second vector so each point closely matches the points you get in the first curve if you interpolate the position at those times. Unfortunately, I didn't find a way to escape having discrete time steps, so you need to pass it a "resolution" parameter that describes how many pieces to break down the interval into.

"How does it work?", you ask. It tries to find the best fit possible where you use the first k "smoothed" points to fit the first t time steps. If you have solved all such problems for smaller values of k and t, it's easy to do: The last point gets assigned time t, and you loop over previous time steps to see what is the best value for the previous point. So I just loop over values of k, for each value of k loop over values of t, and for each (k,t) solve the problem (which involves yet another loop). The time complexity is O(n_smoothed_points*resolution*(resolution+log(n_original_curve_points)), and the intermediate memory used is O(n_smoothed_points*resolution), I think. The "log(n_original_curve_points)" comes from the binary searches into the original curve to find the position at a given time by interpolating the points of the original curve.

I hope it works for you. If you find any issues, try to write a minimal input that shows the problem and post a question here.

 

@alvaro Hello and thanks for your detailed explanation. I'm done for now with optimizing my previous code, and I integrated your code as well.
I didn't touch your code at all, but I made some extra vectors to be able to pass the data to your function.
Well, it works! But unfortunately only as long as the curve does not intersect itself.
I made a video here --> LINK
 
@Zakwayda Since I cleaned up my code, I'm now at the point where the RamerDouglasPeucker reduction function returns not only x/y but also the absolute time values of the leftover points/vertices.
I also ditched the delta times in the structs, since they aren't really necessary for playback. So I have absolute times now.
 
 

3D Artist

Thanks for sharing your progress!

For my part, I'll continue to advocate for doing the interpolation as part of the smoothing process if possible, because it should be more efficient, it's arguably more parsimonious, it doesn't require quantization or other workarounds, and it shouldn't have any issues with complex arbitrary paths beyond any issues that the underlying smoothing algorithms have. Plus, it sounds like you're partway there, given that your RDP code now preserves the ancillary point data.

That said, I was under the impression (perhaps mistakenly) that alvaro's post-processing solution is intended to be able to handle self-intersecting paths, so if you've found a failure case, it might be interesting to post it here. (If it were my code, I'd certainly be curious about any failure cases.)

I agree with Zakwayda: It would be better to just never forget the timestamp of each point, and carry it through smoothing steps; and I am interested in an example where my code is not doing the right thing.

Are you sure the time of the first point in each of the traces is given to my code as 0? Try making a self-intersecting trace as the first trace, and send me the data where my algorithm failed, please.
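As an illustration of the "carry the timestamps through the smoothing" idea both posters advocate, here is a minimal Ramer-Douglas-Peucker sketch in which only x/y drive the reduction, but every surviving point keeps its original absolute time. The TimedPoint struct, function names, and epsilon handling are assumptions for illustration, not the poster's actual code:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct TimedPoint { double t, x, y; };

// Perpendicular distance from p to the line through a and b (falls back
// to the endpoint distance when a and b coincide).
static double perp_dist(const TimedPoint& p, const TimedPoint& a, const TimedPoint& b) {
  double dx = b.x - a.x, dy = b.y - a.y;
  double len = std::sqrt(dx * dx + dy * dy);
  if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
  return std::fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Keep the farthest point if it deviates more than epsilon, recursing on
// both halves; otherwise drop everything between the endpoints. Each call
// appends the last point of its segment, so timestamps travel for free.
static void rdp_rec(const std::vector<TimedPoint>& in, std::size_t first, std::size_t last,
                    double epsilon, std::vector<TimedPoint>& out) {
  double dmax = 0.0;
  std::size_t index = first;
  for (std::size_t i = first + 1; i < last; ++i) {
    double d = perp_dist(in[i], in[first], in[last]);
    if (d > dmax) { dmax = d; index = i; }
  }
  if (dmax > epsilon) {
    rdp_rec(in, first, index, epsilon, out);
    rdp_rec(in, index, last, epsilon, out);
  } else {
    out.push_back(in[last]);
  }
}

std::vector<TimedPoint> rdp(const std::vector<TimedPoint>& in, double epsilon) {
  std::vector<TimedPoint> out;
  if (in.empty()) return out;
  out.push_back(in.front());
  if (in.size() > 1) rdp_rec(in, 0, in.size() - 1, epsilon, out);
  return out;
}
```

With this shape there is nothing to recover afterwards: the reduced curve already has a valid absolute time per vertex.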

 

 

@alvaro Okay, I made a new VIDEO where I draw a self-intersecting trace, and it fails on the first try.
I'm sure that the data given to your code starts with zero; otherwise it would not work even in the cases where the trace is not intersecting.
I believe that WHEN it was intersecting (1st video), it just looked like it was intersecting but didn't actually hit another previous coordinate point. I had the Line style ON, so SFML drew a solid line that just passed between 2 previous vertices and didn't actually hit one directly. At least this is how I imagine it...
I also switched the Line style of SFML back to just dots/vertices for better visibility.
I hope these were the expected actions.
 
@Zakwayda Haha, sounds promising: "Half way there..". I don't exactly understand what you mean by "curious about failure cases".
Do you mean I should provide data that lead to a failure? If so, then I just did that.
 
The part that I don't understand about solving this problem is: why do we look at it from a graphical perspective when the actual data is really a straight "line" (a 1D array/vector)?
I'm sure there is more to it than I understand. But that's just my 2 cents.
 
Like, if the original vector is smaller than the smoothed one: divide the smoothed one by the original, and then interpolate the segments somehow.
Why do we need to incorporate "physical" distances?
 
 

Data.txt


50 minutes ago, C3D_ said:
Haha, sounds promising: "Half way there..". I don't exactly understand what you mean by "curious about failure cases".
Do you mean I should provide data that lead to a failure? If so, then I just did that.

By 'curious about failure cases' I meant that I'd be curious about exactly what circumstances cause the algorithm to fail. And yeah, I was suggesting making specific data available, which you did. I will mention one thing about that, though:

Quote

...WHEN it was intersecting (1st video), it just looked like it was intersecting but didn't actually hit another previous coordinate point.

Just to be clear, by self-intersecting I don't necessarily mean that one point lands directly on top of another. I just mean that the path formed by the points intersects itself (which may involve going 'between' points, as you seem to be describing).

Quote

The part that I don't understand about solving this problem is: why do we look at it from a graphical perspective when the actual data is really a straight "line" (a 1D array/vector)?
I'm sure there is more to it than I understand. But that's just my 2 cents.
 
Like, if the original vector is smaller than the smoothed one: divide the smoothed one by the original, and then interpolate the segments somehow.
Why do we need to incorporate "physical" distances?

Just looking at this part:

Quote

Divide the smoothed one by the original.

You'd have to specify what you mean by division in this context, as it's not immediately obvious (to me at least).

As for your question about geometry and physical distances (what you refer to as the graphical perspective), even though the input data consists of 1-d arrays, the contents of the arrays include 2-d positions. Given that at least part of the input is geometrical in nature, it seems natural that the solution to the problem would involve geometry. As such, I'm not sure what a solution independent of geometry (i.e. graphical perspective) would look like. If you can come up with a concrete example though, it'd be interesting to see what you have in mind exactly.

Second take.

Try this as the main loop of the dynamic programming:


  // Dynamic programming: fit_quality[k][t] holds the best total error when
  // smoothed point k is assigned time step t, and time_of_previous_point[k][t]
  // records the argmin so the assignment can be recovered by backtracking.
  // (Row fit_quality[0][...] must be filled in before this loop runs.)
  for (size_t last_point = 1; last_point < smoothed.size(); ++last_point) {
    for (unsigned last_time = 0; last_time <= resolution; ++last_time) {
      // Cost of placing this smoothed point at this time step: its distance
      // to the original curve interpolated at the corresponding absolute time.
      double error = distance(smoothed[last_point], time_interpolation(original, total_time * ((double)last_time / resolution)));
      double min_error_in_previous_points = 1e20;  // sentinel: effectively +infinity
      // Best way to fit all earlier points using time steps up to last_time.
      for (unsigned prev_time = 0; prev_time <= last_time; ++prev_time) {
        double error_in_previous_points = fit_quality[last_point - 1][prev_time];
        if (error_in_previous_points < min_error_in_previous_points) {
          min_error_in_previous_points = error_in_previous_points;
          time_of_previous_point[last_point][last_time] = prev_time;
        }
      }
      error += min_error_in_previous_points;
      fit_quality[last_point][last_time] = error;
    }
  }
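The loop above only fills the tables; to get an actual time per smoothed point, the assignment still has to be read back. A minimal backtracking sketch, assuming the table convention from the loop above and that best_last_time is the argmin over the last row of fit_quality (the function name and signature are illustrative, not from the original code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Walk the DP table backwards: time_of_previous_point[k][t] is assumed to
// hold the best time step for point k-1 given that point k sits at step t.
// Returns one time-step index per smoothed point, in order.
std::vector<unsigned> backtrack_times(
    const std::vector<std::vector<unsigned>>& time_of_previous_point,
    unsigned best_last_time) {
  std::size_t n = time_of_previous_point.size();
  std::vector<unsigned> times(n);
  times[n - 1] = best_last_time;          // last point takes the best final step
  for (std::size_t k = n - 1; k > 0; --k)
    times[k - 1] = time_of_previous_point[k][times[k]];  // follow the argmin chain
  return times;
}
```

Multiplying each index by total_time / resolution then turns the steps back into absolute times.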

 

 

 

 

@alvaro Yes, this works now!! But it's slow! See video HERE.
It just seems as if the calculation time grows not linearly but exponentially with the length of the curve.
I had to feed smoothcurve.size() into the resolution slot of the function, because if size() exceeds the preset value of 1000, the function will crash.
 
So the resolution always has to be >= smoothcurve.size();
 
@Zakwayda Okay, that was just a theory of mine. Because the retime function sometimes worked even when there WAS an intersection, I assumed it might have to do with points landing on each other and not just lines crossing other lines.
 
By division I mean: segments = smoothed.size() / original.size()
So each segment (made of n points) is a representation of one point of the original curve.
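For what it's worth, that index-ratio idea can be sketched without any geometry at all. This is a hypothetical illustration (the function name and signature are made up), and it also shows the limitation the other posters point out: times are assigned purely by array position, so they only match the true timing if the pen moved at a constant sampling rate along the whole trace:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Geometry-free retiming: map each smoothed point's array index linearly
// onto the original curve's timestamp array and lerp between neighbors.
// Assumes original_times is non-empty and sorted in increasing order.
std::vector<double> retime_by_index(const std::vector<double>& original_times,
                                    std::size_t smoothed_size) {
  std::vector<double> times(smoothed_size);
  for (std::size_t i = 0; i < smoothed_size; ++i) {
    // Fractional position of smoothed point i within [0, 1].
    double u = (smoothed_size > 1) ? double(i) / double(smoothed_size - 1) : 0.0;
    // Corresponding fractional index into the original time array.
    double fi = u * double(original_times.size() - 1);
    std::size_t lo = static_cast<std::size_t>(fi);
    std::size_t hi = (lo + 1 < original_times.size()) ? lo + 1 : lo;
    double frac = fi - double(lo);
    times[i] = original_times[lo] + frac * (original_times[hi] - original_times[lo]);
  }
  return times;
}
```

Where the original timestamps are unevenly spaced (slow strokes, pauses), this sketch silently assigns the wrong times, which is exactly the case the distance-based approaches handle.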
 
Or another idea would be to look at the original curve and see where the areas with the longest timestamps are,
and then somehow use this information to split.
 
 
 
 
 


Actually, I initially thought that the resolution had to be larger than the number of points in the smoothed curve, because I assumed the assigned times had to be distinct. But then I didn't impose that condition in the final version of the loop I wrote. So it will probably work with a fixed resolution.

Anyway, if your resolution is proportional to the length of the smoothed curve, the execution time will grow as its cube (which is not "exponentially"; these things have definitions).

Whatever happened to the idea of keeping the times associated with the points through the smoothing steps? What's the big impediment there? I only wrote the code for your original question because it was fun, but it seems completely unreasonable to throw away the timestamps and then have to recover them in a complicated way.

2 hours ago, C3D_ said:
Yes, this works now!! But its slow!

I realize this is obvious, but are you evaluating a release build? (Accidentally evaluating a debug build is an easy mistake to make.)

In any case, given that the post-processing is likely to be expensive regardless, I'll echo alvaro:

Quote

Whatever happened to the idea of keeping the times associated with the points through the smoothing steps? What's the big impediment there? I only wrote the code for your original question because it was fun, but it seems completely unreasonable to throw away the timestamps and then have to recover them in a complicated way.

So, why aren't you doing this as part of the smoothing process? That seems like the obvious (and optimal) solution to the problem at hand.

This topic is closed to new replies.
