While studying compiler design, I learned that we need finite automata (DFA or NFA) when designing a lexical analyzer. I would like to know: is an NFA only used as an intermediate step in converting regular expressions to a DFA? Is it possible to realize an NFA practically? Or is an NFA used because it is more efficient than a DFA?
Realizing an NFA directly in practice typically means backtracking, since your program flow cannot be in multiple different states simultaneously. (There are parallel processors, of course, but it’s hard to map the irregular, data-dependent behaviour of user-written regexes onto the straightforward, regular data flow that vectorization units are designed for.) An alternative is to simulate the NFA by tracking the *set* of states it could currently be in, which is essentially running the subset construction on the fly. Still, for deciding a pure “does this match?” question, it’s almost always better to transform the NFA into a DFA ahead of time and run that. But NFAs are still used in practice, because DFAs cannot do some things that users definitely want, such as capturing subgroups of an expression.
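To make the distinction concrete, here is a minimal sketch (my own illustrative code, not taken from any particular lexer): a hand-written NFA for the language of `(a|b)*abb`, matched once by simulating the set of active NFA states, and once by first converting it to a DFA via subset construction. The state numbering and transition table are assumptions chosen for the example.

```python
# Hand-written NFA for (a|b)*abb over the alphabet {a, b}.
# Transitions: state -> {symbol: set of successor states}.
NFA = {
    0: {"a": {0, 1}, "b": {0}},  # state 0 loops on a/b; 'a' may also start "abb"
    1: {"b": {2}},               # saw "a", expect "bb"
    2: {"b": {3}},               # saw "ab", expect "b"
    3: {},                       # accepting: saw "abb"
}
NFA_START, NFA_ACCEPT = {0}, {3}

def nfa_match(s):
    """Simulate the NFA by tracking all states it could be in at once."""
    states = set(NFA_START)
    for ch in s:
        states = {t for st in states for t in NFA[st].get(ch, set())}
    return bool(states & NFA_ACCEPT)

def subset_construction():
    """Convert the NFA to a DFA whose states are frozensets of NFA states."""
    start = frozenset(NFA_START)
    dfa, todo = {}, [start]
    while todo:
        cur = todo.pop()
        if cur in dfa:
            continue
        dfa[cur] = {}
        for ch in "ab":
            nxt = frozenset(t for st in cur for t in NFA[st].get(ch, set()))
            dfa[cur][ch] = nxt
            todo.append(nxt)
    return dfa, start

def dfa_match(s, dfa, start):
    """Run the precomputed DFA: exactly one current state, no sets at runtime."""
    cur = start
    for ch in s:
        cur = dfa[cur][ch]
    return bool(cur & NFA_ACCEPT)
```

The DFA version does constant work per input character regardless of how many NFA states would be live, which is why precomputing the DFA wins for plain yes/no matching.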