The emergence of the 'code naturalness' concept, which suggests that software code shares statistical properties with natural language, paves the way for deep neural networks (DNNs) in software engineering (SE). On the other hand, DNNs are vulnerable to adversarial examples (AEs), which induce erroneous model predictions through human-imperceptible perturbations. Although AE generation has been extensively studied in computer vision and natural language processing, it becomes considerably more challenging for the source code of programming languages. One of the challenges derives from the constraints imposed on valid perturbations, including syntactic correctness, semantic preservation, and minimal-modification criteria. These constraints, however, are subjective and can conflict with the goal of fooling DNNs. This paper develops a multi-objective adversarial attack method (dubbed MOAA) that tailors NSGA-II, a powerful evolutionary multi-objective (EMO) algorithm, and integrates it with CodeT5 to generate high-quality AEs based on the contextual information of the original code snippet. Experiments on 5 source code tasks with 10 datasets spanning 6 programming languages show that our approach can generate a diverse set of high-quality AEs with promising transferability. In addition, using our AEs, we provide, for the first time, insights into the internal behavior of pre-trained models.
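To illustrate how an NSGA-II-style method balances conflicting objectives such as attack effectiveness versus minimal modification, the sketch below implements the fast non-dominated sorting step that ranks candidate AEs into Pareto fronts. This is a minimal, self-contained illustration, not the paper's implementation: the two objectives (negated attack score, number of token edits, both minimized) and the candidate values are hypothetical stand-ins.

```python
from typing import List, Tuple

Obj = Tuple[float, ...]

def dominates(a: Obj, b: Obj) -> bool:
    # a dominates b (minimization): no worse in every objective,
    # strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objs: List[Obj]) -> List[List[int]]:
    # Standard NSGA-II ranking: front 0 holds non-dominated candidates,
    # front 1 those dominated only by front 0, and so on.
    dominated_by = [[] for _ in objs]   # indices each candidate dominates
    dom_count = [0] * len(objs)         # how many candidates dominate each one
    fronts: List[List[int]] = [[]]
    for p in range(len(objs)):
        for q in range(len(objs)):
            if dominates(objs[p], objs[q]):
                dominated_by[p].append(q)
            elif dominates(objs[q], objs[p]):
                dom_count[p] += 1
        if dom_count[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]  # drop the trailing empty front

# Hypothetical candidate AEs: (-attack_score, edit_count), both minimized.
candidates = [(-0.9, 5), (-0.4, 1), (-0.9, 2), (-0.2, 4)]
fronts = fast_non_dominated_sort(candidates)
print(fronts)  # candidates 1 and 2 are Pareto-optimal trade-offs
```

Candidate 2 attacks as strongly as candidate 0 with fewer edits, and candidate 1 is the cheapest edit, so neither dominates the other; they form the first Pareto front from which a selection step would breed the next generation.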