Deep neural networks (DNNs) have achieved great success in a wide range of artificial intelligence (AI) applications. However, developing high-quality AI services that satisfy diverse real-life edge scenarios remains difficult. As DNNs become increasingly compute- and memory-intensive, edge devices struggle to accommodate them given their limited computation and memory resources, tight power budgets, and small form factors. Further challenges arise from the demanding requirements of edge AI, including real-time responses, high-throughput performance, and reliable inference accuracy. To address these challenges, we propose a series of efficient design methods that perform algorithm/accelerator co-design and co-search for optimized edge AI solutions. We demonstrate the proposed methods on popular edge AI applications (object detection and image classification) and achieve significant improvements over prior designs.