Turning a 1-D array into a multi-channel matrix using a delay process

The single-channel received data is a one-dimensional array; it's basically my audio input. I need to expand the dimensions of this array so I can apply some multi-channel algorithms to it. The paper I'm reading (Improvement of Source Number Estimation Method for Single Channel Signal) says this is achieved with a delay process, where the single-channel data is denoted as

    y(n), n = 1, 2, ..., L

and the i-th channel's signal as

    y_i(n) = y(n + (i-1)d)

With this, an N-channel received data matrix Y is formed as follows:

    Y = [ y(n)
          y(n+d)
          ...
          y(n+(N-1)d) ]

where d is the delay length of each channel. (Sorry about the badly formatted matrix; I'm new to this.) However, I'm having trouble converting this into Python code. I'm new to programming, and I've been trying for days, but I just can't figure out this part. Can someone please help me? Would it be easier to do this in MATLAB?
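
For context, here is a minimal sketch of what I think the delay construction should look like in NumPy (the helper name delay_matrix and the decision to truncate all channels to a common valid length are my own guesses, not something the paper specifies):

    import numpy as np

    def delay_matrix(y, N, d):
        # Stack N delayed copies of the 1-D signal y into an (N, M) matrix.
        # Row i is y(n + i*d); all rows are truncated to the common valid
        # length M = L - (N-1)*d (my assumption; the paper might zero-pad
        # the shorter rows instead).
        y = np.asarray(y)
        L = len(y)
        M = L - (N - 1) * d
        if M <= 0:
            raise ValueError("signal too short for this N and d")
        return np.stack([y[i * d : i * d + M] for i in range(N)])

    # Example: an 8-sample signal, 3 channels, delay of 2 samples
    y = np.arange(8)                  # [0 1 2 3 4 5 6 7]
    Y = delay_matrix(y, N=3, d=2)
    print(Y)
    # [[0 1 2 3]
    #  [2 3 4 5]
    #  [4 5 6 7]]

The same slicing idea carries over directly to MATLAB, so I suspect switching languages wouldn't make this part any easier.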


