In this paper, we propose an approach for designing high-performance, energy-efficient processing elements (PEs) using statically scheduled, nanocode-based architectures. Our approach is based on bottom-up refinement/trimming techniques that optimize a given datapath regardless of whether it was designed manually or generated automatically. The optimizations can also preserve designer-specified parts of the netlist, allowing design effort to be reused and leading to predictable convergence. We show that trimming unused and underutilized resources from typical general-purpose datapaths yields 30-40% average energy savings without any performance loss. General-purpose architectures, however, often sacrifice parallelism to keep the design implementable. With our trimming approach, we can afford a base architecture that is not intended for direct implementation and exposes more parallelism, and then apply refinement to make it implementable. For our benchmarks, we achieved performance improvements of up to 1.8 times (avg. 25%) and 2.6 times (avg. 40%) over two general-purpose architectures (a 4-issue VLIW and a DLX, respectively). Additionally, energy consumption is reduced by up to 5 times (avg. 2 times) compared to the trimmed general-purpose architectures.