Research Question

There has been explosive growth in the amount of data and information available to us and about us ("big data"). Our daily activities produce vast quantities of data that provide raw material for companies to create detailed profiles about us. Companies benefit from fewer wasted marketing dollars and a greater capability to make inferences about our behavior. People benefit because they receive personalized offerings and free access to massive amounts of information (a goldmine). The accumulation of data is accelerating, and the types of data are varied and vast. Alongside the explosion of data from company efforts, there is also an explosion of data from what people share about themselves. Although people share, they also report concerns about the disclosure of personal data (the privacy paradox). Thus, while the profusion of personal data yields benefits, there are potential costs related to how people's personal data and information are used (a minefield). This paper seeks to "spark conversation" by examining recent trends in data collection and aggregation, exploring potential harms that give rise to privacy concerns, and identifying the costs and tradeoffs people are willing to accept to offset potential harms or to gain specific benefits.

Summary of Findings

Kokolakis (2015) notes that while the privacy paradox is no longer a paradox, it remains a complex phenomenon that is not fully explained. Complicating efforts to explain this paradox are recent developments in data collection and aggregation that have become more sophisticated and less transparent (e.g., facial recognition, cognitive computing, smart algorithms). With these developments, the ability to track and monitor people without their awareness is increasing, which raises the possibility of nudging or controlling people's behaviors. This is both promising and problematic.
It is promising in that we see the design of personalized product offerings that anticipate our needs (e.g., a smarter Siri, Hello Barbie) and that nudge us toward positive behaviors (e.g., exercise). Further, smart algorithms rely on "scientific" data rather than biases or heuristics and thus eliminate the need to use social constructs to profile people and predict behavior (although some evidence may suggest otherwise). While nudging clearly can lead to positive outcomes, it may be problematic in that these same smart tools may limit our choices (e.g., to high-interest credit cards or mortgages). This, too, is personalization, although not at its best.

Key Contributions

As the field of big data changes, new developments will continue to unfold. Many of the developments reviewed in this paper rely on methods of data collection and aggregation that make it more difficult for people to balance the loss of privacy against increased personalization. This paper seeks to explore this complex phenomenon further. One of the author's starting points will be Solove's (2006) taxonomy of privacy, which provides a framework for examining the range of problems that can arise from the "activities that technology enables." While his framework was designed to help understand privacy problems and protections from a legal perspective, his taxonomy provides a useful lens for examining the big-data-as-goldmine-versus-minefield dichotomy. Unfettered access to people's data allows companies to build "actionable intelligence" (a goldmine), but this same access makes it more difficult for people to assess the costs of the use of their data (a minefield). The use of big data works well when people's interests are aligned with those of the data collector. When they are not aligned, the question arises of whose interests are served, particularly if people are unaware of the data collection. This paper seeks to better understand this relationship.