Fuzzy logic has become a suitable tool for representing uncertain models of complex systems that cannot be easily described in terms of conventional mathematics. In particular, fuzzy hardware has become the choice for reaching high inference rates. There are two ways to represent the universe of membership values: the first uses floating-point numbers, the second an integer universe. In the first case, the result of operations belongs to [0, 1]; in the second, results belong to the integer interval [0, m], where m is an integer represented with a number of bits chosen according to the resolution demands of the application. The integer case is well suited to digital computers, because floating-point operations consume far more time and resources than integer operations. This paper presents an algorithm that implements a fuzzy inference engine for discrete implementations, together with a new defuzzification procedure, with the objective of reducing the number of instructions executed and, as a consequence, the processing time and the resources consumed. Simulation results for the new methods are presented.
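As a minimal sketch of the integer-universe representation described above (the 8-bit resolution and all names are illustrative assumptions, not taken from the paper), a membership value in [0, 1] can be scaled to the integer interval [0, m] with m = 2^b - 1, after which the usual min/max fuzzy connectives run entirely on integers:

```python
# Sketch: integer-universe membership values, assuming an 8-bit
# resolution so that m = 2**8 - 1 = 255; names are illustrative only.

M = 2**8 - 1  # top of the integer universe [0, m]

def to_integer_universe(mu: float) -> int:
    """Scale a membership value from [0, 1] to the integer interval [0, M]."""
    return round(mu * M)

def fuzzy_and(a: int, b: int) -> int:
    """Minimum t-norm, computed with integer operations only."""
    return min(a, b)

def fuzzy_or(a: int, b: int) -> int:
    """Maximum s-norm, computed with integer operations only."""
    return max(a, b)

a = to_integer_universe(0.6)   # 153
b = to_integer_universe(0.25)  # 64
print(fuzzy_and(a, b))  # 64
print(fuzzy_or(a, b))   # 153
```

Once all membership values live in [0, M], the inference chain never touches floating point, which is the property the discrete implementation exploits.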