LEADER 00000cam a2200529 i 4500 
001    on1233266753 
003    OCoLC 
005    20240117115625.0 
008    210129t20212020nyu      b    001 0 eng d 
020    9780393868333|q(paperback) 
020    0393868338|q(paperback) 
035    (OCoLC)1233266753 
040    YDX|beng|cYDX|dBDX|dIBI|dOCLCF|dOCLCO|dOCLCQ|dNJR|dOCLCL
       |dGPI 
049    GPIA 
050  4 Q334.7|b.C47 2021 
082 04 174/.90063|qOCoLC|223/eng/20230216 
100 1  Christian, Brian,|d1984-|eauthor.|1https://id.oclc.org/
       worldcat/entity/E39PBJrrwfrFjRwK7FhWt7WqwC 
245 14 The alignment problem :|bmachine learning and human values
       /|cBrian Christian. 
246 30 Machine learning and human values 
264  1 New York, NY :|bW.W. Norton & Company,|c[2021] 
264  4 |c©2020 
300    xvi, 476 pages ;|c21 cm 
336    text|btxt|2rdacontent 
337    unmediated|bn|2rdamedia 
338    volume|bnc|2rdacarrier 
386    |mGender group:|ngdr|aMen|2lcdgt 
386    |mNationality/regional group:|nnat|aCalifornians|2lcdgt 
386    |mOccupational/field of activity group:|nocc|aScholars
       |2lcdgt 
504    Includes bibliographical references (pages [401]-451) and 
       index. 
505 00 |gI.|tPROPHECY. --|g1.|tRepresentation --|g2.|tFairness --
       |g3.|tTransparency --|gII.|tAGENCY. --|g4.|tReinforcement 
       --|g5.|tShaping --|g6.|tCuriosity --|gIII.|tNORMATIVITY. -
       -|g7.|tImitation --|g8.|tInference --|g9.|tUncertainty -- 
       Conclusion. 
520    "A jaw-dropping exploration of everything that goes wrong 
       when we build AI systems-and the movement to fix them. 
       Today's "machine-learning" systems, trained by data, are 
       so effective that we've invited them to see and hear for 
       us-and to make decisions on our behalf. But alarm bells 
       are ringing. Systems cull résumés until, years later, we 
       discover that they have inherent gender biases. Algorithms
       decide bail and parole-and appear to assess black and 
       white defendants differently. We can no longer assume that
       our mortgage application, or even our medical tests, will 
       be seen by human eyes. And autonomous vehicles on our 
       streets can injure or kill. When systems we attempt to 
       teach will not, in the end, do what we want or what we 
       expect, ethical and potentially existential risks emerge. 
       Researchers call this the alignment problem. In best-
       selling author Brian Christian's riveting account, we meet
       the alignment problem's "first-responders," and learn 
       their ambitious plan to solve it before our hands are 
       completely off the wheel"--|cProvided by publisher. 
650  0 Artificial intelligence|xMoral and ethical aspects. 
650  0 Artificial intelligence|xSocial aspects. 
650  0 Machine learning|xSafety measures. 
650  0 Software failures. 
650  0 Social values. 
650  2 Social Values|0(DNLM)D012945 
650  7 Artificial intelligence|xMoral and ethical aspects.|2fast
       |0(OCoLC)fst00817273 
650  7 Artificial intelligence|xSocial aspects.|2fast
       |0(OCoLC)fst00817279 
650  7 Social values.|2fast|0(OCoLC)fst01123424 
650  7 Software failures.|2fast|0(OCoLC)fst01124200 
994    C0|bGPI 