HDFS is a distributed file system. It is cluster-wide (one machine or many), and once a file is in HDFS you lose the notion of individual machines underneath; that abstraction is what makes it so useful. If the file is bigger than the block size, it is cut into blocks, and each block is copied to other machines in your cluster according to the replication factor. Where those replicas land is decided by HDFS's block placement policy (rack-aware by default).
In your case, with a 3-node cluster (plus the NameNode), a 1 MB source file, a 64 MB block size, and a replication factor of 3, you end up with 3 copies of a single block, one on each of the 3 DataNodes, each holding your 1 MB file. From the HDFS perspective, though, you still have only one file. Once the file is in HDFS, you don't really think about machines any more, because at the machine level there are no files, only file blocks.
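The arithmetic above can be sketched in a few lines of plain Python (the 64 MB block size and replication factor of 3 are just the numbers from this example, not HDFS defaults on every cluster):

```python
# Sketch of HDFS block arithmetic: a file is split into
# ceil(file_size / block_size) blocks, and each block is
# stored replication_factor times across the cluster.
import math

def hdfs_blocks(file_size_mb, block_size_mb=64, replication_factor=3):
    """Return (number of blocks, total replicas stored in the cluster)."""
    blocks = math.ceil(file_size_mb / block_size_mb)
    return blocks, blocks * replication_factor

# The 1 MB file from the question: one block, three replicas.
print(hdfs_blocks(1))    # (1, 3)
# A 200 MB file would be split into 4 blocks, 12 replicas in total.
print(hdfs_blocks(200))  # (4, 12)
```

Note that a 1 MB file does not waste 64 MB on disk: the block size is an upper bound on a block's size, not a fixed allocation unit.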
If for whatever reason you really need the whole file on one machine, what you can do is set the replication factor to 1 and run a single-node cluster, which will guarantee that (admittedly unusual) requirement.
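For example, the default replication can be pinned to 1 in `hdfs-site.xml` (`dfs.replication` is the standard property; it only affects files written after the change):

```xml
<!-- hdfs-site.xml: every newly written file gets a single replica -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```

For files that already exist, you can change the replication afterwards with `hdfs dfs -setrep -w 1 /path/to/file`.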
Finally, you can always check where the blocks of a file actually live, e.g. with `hdfs fsck /path/to/file -files -blocks -locations`, or inspect the NameNode metadata with the Offline Image Viewer (`hdfs oiv`).