From books (double-checked with ChatGPT), the steps for a random forest can be summarized as below:
1. Draw a random bootstrap sample of size n (randomly choose n examples from the training dataset with replacement).
2. Grow a decision tree from the bootstrap sample. At each node:
   a). Randomly select d features **without replacement**.
   b). Split the node using the feature that provides the best split according to the objective function, for instance, maximizing the information gain.
3. Repeat steps 1-2 k times.
…
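The steps above can be sketched in code. This is a minimal illustration, not a production implementation: it assumes Gini impurity as the objective, and the names `random_forest`, `grow_tree`, and `best_split` are made up for this example. The key line is the per-node feature draw in `grow_tree`.

```python
import numpy as np

def best_split(X, y, feature_idx):
    """Among the candidate features, find the (feature, threshold)
    pair with the highest impurity reduction (Gini used here)."""
    def gini(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    best, parent = None, gini(y)
    for f in feature_idx:
        for t in np.unique(X[:, f]):
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            gain = parent - (len(left) * gini(left)
                             + len(right) * gini(right)) / len(y)
            if best is None or gain > best[0]:
                best = (gain, f, t)
    return best

def grow_tree(X, y, d, rng, depth=0, max_depth=3):
    # Stopping criteria: node is pure or depth limit reached --
    # NOT "no features left", because the pool is never consumed.
    if len(np.unique(y)) == 1 or depth == max_depth:
        return np.bincount(y).argmax()
    # Step 2a: at EVERY node, draw a fresh subset of d features
    # without replacement from the FULL set of features. "Without
    # replacement" only means no duplicates within this one draw.
    feats = rng.choice(X.shape[1], size=d, replace=False)
    split = best_split(X, y, feats)
    if split is None:
        return np.bincount(y).argmax()
    _, f, t = split
    mask = X[:, f] <= t
    return (f, t,
            grow_tree(X[mask], y[mask], d, rng, depth + 1, max_depth),
            grow_tree(X[~mask], y[~mask], d, rng, depth + 1, max_depth))

def random_forest(X, y, k=5, d=2, seed=0):
    rng = np.random.default_rng(seed)
    trees, n = [], len(X)
    for _ in range(k):                               # step 3: repeat k times
        idx = rng.choice(n, size=n, replace=True)    # step 1: bootstrap WITH replacement
        trees.append(grow_tree(X[idx], y[idx], d, rng))  # step 2
    return trees
```

Note the two different sampling modes: the bootstrap in step 1 is *with* replacement (the same training example can appear twice), while the feature draw in step 2a is *without* replacement (no feature appears twice within a single node's candidate set).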
My question is about the bold words: given that the features are selected without replacement, if all features end up being selected, will the tree stop growing (because there are no features left to select)? This doesn't seem to happen in practice, so how should I understand this step?