During code reviews, an essential step in software quality assurance, reviewers face the difficult task of understanding and evaluating code changes to validate their quality and prevent the introduction of faults into the codebase. This is a tedious process in which the required effort depends heavily on the code submitted, as well as on the author’s and the reviewer’s experience, leading to median wait times for review feedback of 15 to 64 hours. This paper aims to improve the velocity and effectiveness of code reviews by predicting three types of review activity at code submission time: which parts of a patch will (1) be commented on, (2) be revised, or (3) be hotspots (i.e., commented on or revised). We evaluate two types of text embeddings (i.e., Bag-of-Words and Large Language Model encodings) and review process features (i.e., code size-based and history-based features) to predict these tasks. Our empirical study on three open-source and two industrial datasets shows that combining code embeddings with review process features yields better results than the state-of-the-art approach. F1-scores (medians of 40-62%) are significantly better than the state-of-the-art for all tasks (improvements of +1 to +9%). Furthermore, we found that size-based review process features contribute the largest performance gains across all datasets, whereas history-based features are less important, though they still improve performance.