Trust negotiation is an approach to establishing trust between strangers through the bilateral, iterative disclosure of digital credentials. In automated trust negotiation, access control policies are associated with sensitive credentials to control the circumstances under which those credentials can be disclosed. Ideally, the information in a user's sensitive credential should not become known to others unless the corresponding policy is satisfied. However, the original model of user interaction in trust negotiation has pitfalls that can easily be exploited to infer one's private information, even when access control policies are strictly enforced. Preserving one's privacy therefore requires a more flexible interaction model for trust negotiation. At the same time, it is desirable for two parties to be able to establish trust whenever possible, so there is a potential conflict between privacy preservation and the assurance of a successful trust negotiation. In this paper, we identify situations in which sensitive information can be inferred by observing one's behavior during trust negotiation. We then propose policy migration as one approach to preventing such inference. Compared with previously proposed approaches, policy migration has low management overhead and strikes a good balance between inference prevention and guarantees of success in trust establishment. We also discuss the limitations of policy migration and possible directions toward more comprehensive solutions.
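The bilateral, iterative disclosure described above can be sketched in a few lines. This is a simplified illustration, not the paper's protocol: credential names are hypothetical, and each policy is modeled as the set of the counterpart's credentials that must already be disclosed before the guarded credential may be released.

```python
def negotiate(policies_a, policies_b, target):
    """Alternate disclosure rounds until `target` (held by B) is released,
    or until neither side can disclose anything new (negotiation fails)."""
    disclosed_a, disclosed_b = set(), set()
    progress = True
    while progress:
        progress = False
        # A releases any credential whose policy B's disclosures now satisfy.
        for cred, policy in policies_a.items():
            if cred not in disclosed_a and policy <= disclosed_b:
                disclosed_a.add(cred)
                progress = True
        # B does the same against A's disclosures.
        for cred, policy in policies_b.items():
            if cred not in disclosed_b and policy <= disclosed_a:
                disclosed_b.add(cred)
                progress = True
        if target in disclosed_b:
            return True  # trust established
    return False  # stalemate: no policy was violated, but negotiation fails

# Hypothetical example: B's "purchase" service requires A's credit card,
# which A only releases after seeing B's Better Business Bureau credential.
a = {"credit_card": {"bbb_seal"}}
b = {"bbb_seal": set(), "purchase": {"credit_card"}}
print(negotiate(a, b, "purchase"))  # True: bbb_seal -> credit_card -> purchase
```

Note that even when a policy is never violated, the loop's observable behavior (which credentials a party releases, and when) can leak information about its hidden policies, which is exactly the inference problem the paper addresses.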