I have a NumPy array of shape (54, 41, 2). How do I efficiently change it to shape (57, 41, 2), so that the three extra entries along the first dimension are filled with zeros?
I did this, but I'm not sure it is correct:

```python
final_data = np.zeros((57, 41, 2))
final_data[:small_data.shape[0]] = small_data
```
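For reference, the approach above can be checked with a small runnable snippet; `small_data` here is a random stand-in for the actual array:

```python
import numpy as np

# Stand-in for the (54, 41, 2) array from the question.
small_data = np.random.random((54, 41, 2))

# Pre-allocate the target shape filled with zeros, then copy the
# original data into the leading slices of the first axis.
final_data = np.zeros((57, 41, 2))
final_data[:small_data.shape[0]] = small_data

print(final_data.shape)               # (57, 41, 2)
print(np.all(final_data[54:] == 0))   # True: the padded slices are zero
```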
`np.concatenate` is a good candidate here: it is clear, and it was designed to do exactly this.
Example:

```
>>> import numpy as np
>>> a = np.arange(3 ** 3).reshape(3, 3, 3)
>>> b = np.zeros((2, 3, 3))
>>> np.concatenate((a, b), axis=0)
array([[[ 0.,  1.,  2.],
        [ 3.,  4.,  5.],
        [ 6.,  7.,  8.]],

       [[ 9., 10., 11.],
        [12., 13., 14.],
        [15., 16., 17.]],

       [[18., 19., 20.],
        [21., 22., 23.],
        [24., 25., 26.]],

       [[ 0.,  0.,  0.],
        [ 0.,  0.,  0.],
        [ 0.,  0.,  0.]],

       [[ 0.,  0.,  0.],
        [ 0.,  0.,  0.],
        [ 0.,  0.,  0.]]])
```
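Applied to the shapes from the question, the same idea looks like this (a sketch; `small_data` is a stand-in for the real array):

```python
import numpy as np

small_data = np.random.random((54, 41, 2))  # stand-in for the real data

# Build the missing zero slices and append them along axis 0.
padding = np.zeros((57 - small_data.shape[0], 41, 2))
final_data = np.concatenate((small_data, padding), axis=0)

print(final_data.shape)  # (57, 41, 2)
```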
Keep in mind that in NumPy, arrays are generally stored contiguously in memory. This is why appending to an array (along any axis) produces a new array: the enlarged array needs a new memory allocation.
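One quick way to see that concatenation allocates fresh memory (my addition, not part of the original answer) is `np.shares_memory`:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
b = np.zeros((2, 4))

c = np.concatenate((a, b), axis=0)

# The result lives in its own allocation: it shares no memory with
# either input, and writing to it leaves the inputs untouched.
print(np.shares_memory(a, c))  # False
c[0, 0] = 99
print(a[0, 0])                 # 0
```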
Note: although `np.vstack` is usually associated with 2D arrays, it stacks along the first axis for arrays of any dimensionality, so in this case it would read more naturally.
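A quick check (my addition) that `np.vstack` gives the same result as `np.concatenate(..., axis=0)` for 3D inputs:

```python
import numpy as np

a = np.arange(27).reshape(3, 3, 3)
b = np.zeros((2, 3, 3))

stacked = np.vstack((a, b))
concatenated = np.concatenate((a, b), axis=0)

# vstack is simply concatenation along the first axis,
# for arrays with at least two dimensions.
print(np.array_equal(stacked, concatenated))  # True
print(stacked.shape)                          # (5, 3, 3)
```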
Comparing `np.concatenate` with your original approach, in which you first define an array of zeros and then overwrite part of it: depending on the size of the arrays involved, the speed gain is small either way:
```
In [14]: a = np.random.ra

In [15]: %timeit original_way(a)
100 loops, best of 3: 3.77 ms per loop

In [16]: %timeit concat_way(a)
100 loops, best of 3: 2.93 ms per loop

In [17]: 2.93 / 3.77
Out[17]: 0.7771883289124669

In [18]: a = np.random.random([1000, 100, 150])

In [19]: %timeit original_way(a)
10 loops, best of 3: 64.6 ms per loop

In [20]: %timeit concat_way(a)
10 loops, best of 3: 64.8 ms per loop
```
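The two functions being timed are not shown in the session above. A plausible reconstruction, using the names `original_way` and `concat_way` from the timings (these implementations are my assumption, matching the two approaches discussed):

```python
import numpy as np

def original_way(a, extra=3):
    # Pre-allocate a zero array of the target shape and
    # overwrite its leading block with the input.
    out = np.zeros((a.shape[0] + extra,) + a.shape[1:])
    out[:a.shape[0]] = a
    return out

def concat_way(a, extra=3):
    # Append zero slices along the first axis.
    pad = np.zeros((extra,) + a.shape[1:])
    return np.concatenate((a, pad), axis=0)

a = np.random.random([1000, 100, 150])
print(np.array_equal(original_way(a), concat_way(a)))  # True
```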
If this turns out to be a bottleneck in your application, I would look for other solutions.