The scarcity of annotated datasets remains a significant impediment to the advancement of Natural Language Processing (NLP) for low-resourced languages. In this work, we introduce a large-scale annotated Named Entity Recognition (NER) corpus for Tigrinya, along with baseline models for Tigrinya NER. Our manually constructed dataset comprises over 200K words tagged for NER, of which over 118K tokens also carry Part-of-Speech (POS) tags, covering 8 distinct entity classes and multiple tagging schemes. We performed extensive experiments with several recurrent neural networks and state-of-the-art transformer models, achieving a best weighted F1-score of 90.18%. These results are particularly notable given the challenges posed by Tigrinya's distinct grammatical structure and complex word morphology. The corpus and models can serve as essential building blocks for developing NLP systems in Tigrinya and related low-resourced languages, and can facilitate cross-referencing with higher-resourced languages.
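To make the corpus description concrete, the following is a minimal sketch of how a token-level NER corpus with POS tags might be read in Python. The assumed layout (one "token POS NER-tag" triple per line, blank lines separating sentences, a BIO-style tagging scheme, and example tags such as B-LOC) is illustrative only and is not the paper's confirmed release format.

```python
# Minimal sketch: reading a CoNLL-style NER corpus with token, POS, and NER columns.
# The column order, blank-line sentence separation, and tag names are assumptions
# made for illustration, not the authors' documented file format.
from typing import List, Tuple

Sentence = List[Tuple[str, str, str]]  # (token, pos_tag, ner_tag)

def read_conll(path: str) -> List[Sentence]:
    sentences: List[Sentence] = []
    current: Sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:                    # blank line ends the current sentence
                if current:
                    sentences.append(current)
                    current = []
                continue
            token, pos, ner = line.split()  # e.g. "Asmara  N  B-LOC" (hypothetical)
            current.append((token, pos, ner))
    if current:                             # flush the last sentence if file lacks a trailing blank line
        sentences.append(current)
    return sentences
```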