The interconnection network is an increasingly important subsystem in current datacenters and supercomputers, where data- and computing-hungry applications from the high-performance computing (HPC), artificial intelligence (AI), and Cloud fields demand a growing number of communication operations among the server hosts. These demands require the network subsystem to guarantee high communication bandwidth and low latency; otherwise, it becomes the bottleneck of the entire system. Numerous interconnection network technologies, such as InfiniBand or Ethernet-based networks, are evolving their architectures to meet these demands. In particular, Ethernet-based interconnection networks have improved their latency and throughput in recent years, so they have become a competitive and popular choice for datacenters and supercomputers. Indeed, several open hardware projects, such as Corundum and NetFPGA, provide interconnection network models that can be implemented in commodity FPGAs. Thanks to these projects, the community is contributing to the development of novel and efficient network functionalities. In this paper, we describe the process of building an FPGA-based prototype for an Ethernet-based interconnection network. Specifically, we use development boards from the NetFPGA platform, each comprising a Xilinx Virtex-7 690T FPGA, internal flash memory, 10GbE transceivers, etc. In these FPGAs, we have implemented open-source designs (NICs and switches) from the NetFPGA and Corundum projects. The cabling of the prototype is flexible, allowing us to measure the performance of both NIC-to-NIC and NIC-to-switch communication. We have also evaluated several transport protocols (i.e., TCP and UDP) using widely used benchmarks and tools, and analyzed different configurations of the software stack to reduce packet dropping in the communication.
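As an illustration of the kind of software-stack configuration mentioned above (the concrete parameters and values used in the evaluation are not stated in this abstract, so the ones below are only an assumed sketch), a common way to reduce UDP packet drops on Linux hosts is to raise the kernel's socket buffer limits and backlog queue via standard sysctl parameters:

```shell
# Sketch of typical Linux network-stack tuning to reduce packet drops.
# Illustrative values only; the prototype's actual configuration may differ.

# Raise the maximum socket receive/send buffer sizes (bytes).
sysctl -w net.core.rmem_max=268435456
sysctl -w net.core.wmem_max=268435456

# Enlarge the per-CPU backlog queue for packets awaiting kernel processing.
sysctl -w net.core.netdev_max_backlog=250000

# Inspect UDP drops caused by full receive buffers after a benchmark run
# (e.g., with iperf3: "iperf3 -s" on the server, "iperf3 -c <host> -u" on the client).
netstat -su | grep -i "receive errors"
```

Such settings are typically varied alongside benchmark runs to observe their effect on throughput and drop counts.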