{"search_session":{},"preferences":{"l":"en","queryLanguage":"en"},"patentId":"146-532-209-798-204","frontPageModel":{"patentViewModel":{"ref":{"entityRefId":"146-532-209-798-204","entityRefType":"PATENT"},"entityMetadata":{"linkedIds":{"empty":true},"tags":[],"collections":[
{"id":8758,"type":"PATENT","title":"California Institute of Technology","description":"","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":14069,"tags":[],"user":{"id":91044780,"username":"Cambialens","firstName":"","lastName":"","created":"2015-05-04T00:55:26.000Z","displayName":"Cambialens","preferences":"{\"usage\":\"public\",\"beta\":false}","accountType":"PERSONAL","isOauthOnly":false},"notes":[{"id":8202,"type":"COLLECTION","user":{"id":91044780,"username":"Cambialens","firstName":"","lastName":"","created":"2015-05-04T00:55:26.000Z","displayName":"Cambialens","preferences":"{\"usage\":\"public\",\"beta\":false}","accountType":"PERSONAL","isOauthOnly":false},"text":"\nSearched: California Institute of Technology\n","published":true,"created":"2015-11-19T02:35:22Z","updated":"2017-08-07T00:27:53Z","sharedType":"PUBLISHED","collectionId":8758,"collectionTitle":"California Institute of Technology","collectionType":"PATENT"}],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2015-11-19T02:35:22Z","updated":"2018-03-15T04:30:55Z","lastEventDate":"2018-03-15T04:30:55Z"},
{"id":8760,"type":"PATENT","title":"California Institute of Technology","description":"Search applicant and Owners= \"California Inst Tech\", \"California Institute of Technology\", \" Inst California Tech*\".\nSelect more for logical variants\nAdd to collection\nTotal patents: more than 10k.\n\n","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":9988,"tags":[],"user":{"id":91044780,"username":"Cambialens","firstName":"","lastName":"","created":"2015-05-04T00:55:26.000Z","displayName":"Cambialens","preferences":"{\"usage\":\"public\",\"beta\":false}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2015-11-19T03:31:37Z","updated":"2015-12-07T02:37:23Z","lastEventDate":"2015-12-07T02:37:23Z"},
{"id":22721,"type":"PATENT","title":"Citing CNRS publications","description":"Patent documents citing scholarly work of CNRS","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":129822,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:38:15Z","updated":"2017-08-07T04:38:15Z","lastEventDate":"2017-08-07T04:38:15Z"},
{"id":22732,"type":"PATENT","title":"Citing UC Berkeley publications","description":"Patent documents citing scholarly work of UC Berkeley","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":71443,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:43:57Z","updated":"2017-08-07T04:43:57Z","lastEventDate":"2017-08-07T04:43:57Z"},
{"id":22733,"type":"PATENT","title":"Citing Univ Dundee publications","description":"Patent documents citing scholarly work of Univ Dundee","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":10319,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:44:56Z","updated":"2017-08-07T04:44:56Z","lastEventDate":"2017-08-07T04:44:56Z"},
{"id":22734,"type":"PATENT","title":"Citing WIS publications","description":"Patent documents citing scholarly work of WIS","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":22444,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:45:06Z","updated":"2017-08-07T04:45:06Z","lastEventDate":"2017-08-07T04:45:06Z"},
{"id":22745,"type":"PATENT","title":"Citing Washington Univ St Louis publications","description":"Patent documents citing scholarly work of Washington Univ St Louis","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":53436,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:48:14Z","updated":"2017-08-07T04:48:14Z","lastEventDate":"2017-08-07T04:48:14Z"},
{"id":22750,"type":"PATENT","title":"Citing Rutgers Univ publications","description":"Patent documents citing scholarly work of Rutgers Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":35383,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:50:09Z","updated":"2017-08-07T04:50:09Z","lastEventDate":"2017-08-07T04:50:09Z"},
{"id":22756,"type":"PATENT","title":"Citing SUNYS publications","description":"Patent documents citing scholarly work of SUNYS","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":51828,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:52:17Z","updated":"2017-08-07T04:52:17Z","lastEventDate":"2017-08-07T04:52:17Z"},
{"id":22761,"type":"PATENT","title":"Citing Carnegie Mellon Univ publications","description":"Patent documents citing scholarly work of Carnegie Mellon Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":16967,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:54:42Z","updated":"2017-08-07T04:54:42Z","lastEventDate":"2017-08-07T04:54:42Z"},
{"id":22764,"type":"PATENT","title":"Citing Univ Paris Descartes publications","description":"Patent documents citing scholarly work of Univ Paris Descartes","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":10846,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:55:45Z","updated":"2017-08-07T04:55:45Z","lastEventDate":"2017-08-07T04:55:45Z"},
{"id":22765,"type":"PATENT","title":"Citing Moscow State Univ publications","description":"Patent documents citing scholarly work of Moscow State Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":6231,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:55:55Z","updated":"2017-08-07T04:55:55Z","lastEventDate":"2017-08-07T04:55:55Z"},
{"id":22766,"type":"PATENT","title":"Citing McGill Univ publications","description":"Patent documents citing scholarly work of McGill Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":33777,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:56:02Z","updated":"2017-08-07T04:56:02Z","lastEventDate":"2017-08-07T04:56:02Z"},
{"id":22771,"type":"PATENT","title":"Citing Univ Southern California publications","description":"Patent documents citing scholarly work of Univ Southern California","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":40914,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:57:43Z","updated":"2017-08-07T04:57:43Z","lastEventDate":"2017-08-07T04:57:43Z"},
{"id":22773,"type":"PATENT","title":"Citing Univ Washington publications","description":"Patent documents citing scholarly work of Univ Washington","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":75126,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:58:47Z","updated":"2017-08-07T04:58:47Z","lastEventDate":"2017-08-07T04:58:47Z"},
{"id":22779,"type":"PATENT","title":"Citing UBC publications","description":"Patent documents citing scholarly work of UBC","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":34693,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:01:43Z","updated":"2017-08-07T05:01:43Z","lastEventDate":"2017-08-07T05:01:43Z"},
{"id":22782,"type":"PATENT","title":"Citing ETH Zurich publications","description":"Patent documents citing scholarly work of ETH Zurich","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":28682,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:02:38Z","updated":"2017-08-07T05:02:38Z","lastEventDate":"2017-08-07T05:02:38Z"},
{"id":22784,"type":"PATENT","title":"Citing UC Irvine publications","description":"Patent documents citing scholarly work of UC Irvine","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":26448,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:03:21Z","updated":"2017-08-07T05:03:21Z","lastEventDate":"2017-08-07T05:03:21Z"},
{"id":22786,"type":"PATENT","title":"Citing Duke Univ publications","description":"Patent documents citing scholarly work of Duke Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":52418,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:04:02Z","updated":"2017-08-07T05:04:02Z","lastEventDate":"2017-08-07T05:04:02Z"},
{"id":22790,"type":"PATENT","title":"Citing MIT publications","description":"Patent documents citing scholarly work of MIT","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":143355,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:05:29Z","updated":"2019-11-29T02:59:34Z","lastEventDate":"2019-11-29T02:59:34Z"},
{"id":22796,"type":"PATENT","title":"Citing Univ Oregon publications","description":"Patent documents citing scholarly work of Univ Oregon","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":6250,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:08:57Z","updated":"2017-08-07T05:08:57Z","lastEventDate":"2017-08-07T05:08:57Z"},
{"id":22799,"type":"PATENT","title":"Citing Univ Alabama System publications","description":"Patent documents citing scholarly work of Univ Alabama System","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":32860,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:09:26Z","updated":"2017-08-07T05:09:26Z","lastEventDate":"2017-08-07T05:09:26Z"},
{"id":22802,"type":"PATENT","title":"Citing UC System publications","description":"Patent documents citing scholarly work of UC System","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":266608,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:10:38Z","updated":"2017-08-07T05:10:38Z","lastEventDate":"2017-08-07T05:10:38Z"},
{"id":22805,"type":"PATENT","title":"Citing UC San Francisco publications","description":"Patent documents citing scholarly work of UC San Francisco","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":73241,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:15:33Z","updated":"2017-08-07T05:15:33Z","lastEventDate":"2017-08-07T05:15:33Z"},
{"id":22806,"type":"PATENT","title":"Citing Durham Univ publications","description":"Patent documents citing scholarly work of Durham Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":5992,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:16:35Z","updated":"2017-08-07T05:16:35Z","lastEventDate":"2017-08-07T05:16:35Z"},
{"id":22819,"type":"PATENT","title":"Citing Korea Univ publications","description":"Patent documents citing scholarly work of Korea Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":6825,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:20:39Z","updated":"2017-08-07T05:20:39Z","lastEventDate":"2017-08-07T05:20:39Z"},
{"id":22820,"type":"PATENT","title":"Citing Univ Groningen publications","description":"Patent documents citing scholarly work of Univ Groningen","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":21355,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:20:46Z","updated":"2017-08-07T05:20:46Z","lastEventDate":"2017-08-07T05:20:46Z"},
{"id":22824,"type":"PATENT","title":"Citing Rockefeller Univ publications","description":"Patent documents citing scholarly work of Rockefeller Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":32455,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:22:00Z","updated":"2017-08-07T05:22:00Z","lastEventDate":"2017-08-07T05:22:00Z"},
{"id":22830,"type":"PATENT","title":"Citing Univ Pittsburgh publications","description":"Patent documents citing scholarly work of Univ Pittsburgh","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":39162,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:24:02Z","updated":"2017-08-07T05:24:02Z","lastEventDate":"2017-08-07T05:24:02Z"},
{"id":22848,"type":"PATENT","title":"Citing Stanford Univ publications","description":"Patent documents citing scholarly work of Stanford Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":108345,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:31:21Z","updated":"2017-08-07T05:31:21Z","lastEventDate":"2017-08-07T05:31:21Z"},
{"id":22852,"type":"PATENT","title":"Citing Univ Oxford publications","description":"Patent documents citing scholarly work of Univ Oxford","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":51799,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:33:16Z","updated":"2017-08-07T05:33:16Z","lastEventDate":"2017-08-07T05:33:16Z"},
{"id":22855,"type":"PATENT","title":"Citing University College London (UCL) publications","description":"Patent documents citing scholarly work of University College London (UCL)","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":53487,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:34:28Z","updated":"2017-08-07T05:34:28Z","lastEventDate":"2017-08-07T05:34:28Z"},
{"id":22856,"type":"PATENT","title":"Citing NIH publications","description":"Patent documents citing scholarly work of NIH","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":151649,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:35:15Z","updated":"2017-08-07T05:35:15Z","lastEventDate":"2017-08-07T05:35:15Z"},
{"id":22857,"type":"PATENT","title":"Citing Univ Bristol publications","description":"Patent documents citing scholarly work of Univ Bristol","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":13107,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:37:22Z","updated":"2017-08-07T05:37:22Z","lastEventDate":"2017-08-07T05:37:22Z"},
{"id":22859,"type":"PATENT","title":"Citing Harvard Univ publications","description":"Patent documents citing scholarly work of Harvard Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":175224,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:37:44Z","updated":"2017-08-07T05:37:44Z","lastEventDate":"2017-08-07T05:37:44Z"},
{"id":22861,"type":"PATENT","title":"Citing Caltech publications","description":"Patent documents citing scholarly work of Caltech","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":38468,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:41:12Z","updated":"2017-08-07T05:41:12Z","lastEventDate":"2017-08-07T05:41:12Z"},
{"id":22869,"type":"PATENT","title":"Citing Univ Tokyo publications","description":"Patent documents citing scholarly work of Univ Tokyo","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":63341,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:43:41Z","updated":"2017-08-07T05:43:41Z","lastEventDate":"2017-08-07T05:43:41Z"},
{"id":22870,"type":"PATENT","title":"Citing UC Santa Cruz publications","description":"Patent documents citing scholarly work of UC Santa Cruz","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":6086,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:44:35Z","updated":"2017-08-07T05:44:35Z","lastEventDate":"2017-08-07T05:44:35Z"},
{"id":22878,"type":"PATENT","title":"Citing Univ London publications","description":"Patent documents citing scholarly work of Univ London","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":96093,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:47:35Z","updated":"2017-08-07T05:47:35Z","lastEventDate":"2017-08-07T05:47:35Z"},
{"id":22881,"type":"PATENT","title":"Citing Univ Geneva publications","description":"Patent documents citing scholarly work of Univ Geneva","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":23097,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:49:38Z","updated":"2017-08-07T05:49:38Z","lastEventDate":"2017-08-07T05:49:38Z"},
{"id":22884,"type":"PATENT","title":"Citing Max Planck Society publications","description":"Patent documents citing scholarly work of Max Planck Society","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":71114,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:50:10Z","updated":"2017-08-07T05:50:10Z","lastEventDate":"2017-08-07T05:50:10Z"},
{"id":22895,"type":"PATENT","title":"Citing UC Santa Barbara publications","description":"Patent documents citing scholarly work of UC Santa Barbara","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":21188,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:55:29Z","updated":"2017-08-07T05:55:29Z","lastEventDate":"2017-08-07T05:55:29Z"},
{"id":22899,"type":"PATENT","title":"Citing Univ Rochester publications","description":"Patent documents citing scholarly work of Univ Rochester","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":24884,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:56:06Z","updated":"2017-08-07T05:56:06Z","lastEventDate":"2017-08-07T05:56:06Z"},
{"id":22901,"type":"PATENT","title":"Citing Johns Hopkins Univ publications","description":"Patent documents citing scholarly work of Johns Hopkins Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":76487,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:56:37Z","updated":"2017-08-07T05:56:37Z","lastEventDate":"2017-08-07T05:56:37Z"},
{"id":22911,"type":"PATENT","title":"Citing Univ Cambridge publications","description":"Patent documents citing scholarly work of Univ Cambridge","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":58870,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T06:00:47Z","updated":"2017-08-07T06:00:47Z","lastEventDate":"2017-08-07T06:00:47Z"}
],"notes":[],"inventorships":[],"privateCollections":[],"publicCollections":[
{"id":8758,"type":"PATENT","title":"California Institute of Technology","description":"","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":14069,"tags":[],"user":{"id":91044780,"username":"Cambialens","firstName":"","lastName":"","created":"2015-05-04T00:55:26.000Z","displayName":"Cambialens","preferences":"{\"usage\":\"public\",\"beta\":false}","accountType":"PERSONAL","isOauthOnly":false},"notes":[{"id":8202,"type":"COLLECTION","user":{"id":91044780,"username":"Cambialens","firstName":"","lastName":"","created":"2015-05-04T00:55:26.000Z","displayName":"Cambialens","preferences":"{\"usage\":\"public\",\"beta\":false}","accountType":"PERSONAL","isOauthOnly":false},"text":"Searched: California Institute of Technology\n","published":true,"created":"2015-11-19T02:35:22Z","updated":"2017-08-07T00:27:53Z","sharedType":"PUBLISHED","collectionId":8758,"collectionTitle":"California Institute of Technology","collectionType":"PATENT"}],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2015-11-19T02:35:22Z","updated":"2018-03-15T04:30:55Z","lastEventDate":"2018-03-15T04:30:55Z"},
{"id":8760,"type":"PATENT","title":"California Institute of Technology","description":"Search applicant and Owners= \"California Inst Tech\", \"California Institute of Technology\", \" Inst California Tech*\".
Select more for logical variants
Add to collection
Total patents: more than 10k.
\n","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":9988,"tags":[],"user":{"id":91044780,"username":"Cambialens","firstName":"","lastName":"","created":"2015-05-04T00:55:26.000Z","displayName":"Cambialens","preferences":"{\"usage\":\"public\",\"beta\":false}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2015-11-19T03:31:37Z","updated":"2015-12-07T02:37:23Z","lastEventDate":"2015-12-07T02:37:23Z"},{"id":22721,"type":"PATENT","title":"Citing CNRS publications","description":"Patent documents citing scholarly work of CNRS","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":129822,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:38:15Z","updated":"2017-08-07T04:38:15Z","lastEventDate":"2017-08-07T04:38:15Z"},{"id":22732,"type":"PATENT","title":"Citing UC Berkeley publications","description":"Patent documents citing scholarly work of UC Berkeley","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":71443,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens 
Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:43:57Z","updated":"2017-08-07T04:43:57Z","lastEventDate":"2017-08-07T04:43:57Z"},{"id":22733,"type":"PATENT","title":"Citing Univ Dundee publications","description":"Patent documents citing scholarly work of Univ Dundee","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":10319,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:44:56Z","updated":"2017-08-07T04:44:56Z","lastEventDate":"2017-08-07T04:44:56Z"},{"id":22734,"type":"PATENT","title":"Citing WIS publications","description":"Patent documents citing scholarly work of WIS","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":22444,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:45:06Z","updated":"2017-08-07T04:45:06Z","lastEventDate":"2017-08-07T04:45:06Z"},{"id":22745,"type":"PATENT","title":"Citing Washington Univ St Louis publications","description":"Patent documents citing scholarly work of Washington Univ St 
Louis","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":53436,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:48:14Z","updated":"2017-08-07T04:48:14Z","lastEventDate":"2017-08-07T04:48:14Z"},{"id":22750,"type":"PATENT","title":"Citing Rutgers Univ publications","description":"Patent documents citing scholarly work of Rutgers Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":35383,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:50:09Z","updated":"2017-08-07T04:50:09Z","lastEventDate":"2017-08-07T04:50:09Z"},{"id":22756,"type":"PATENT","title":"Citing SUNYS publications","description":"Patent documents citing scholarly work of SUNYS","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":51828,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens 
Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:52:17Z","updated":"2017-08-07T04:52:17Z","lastEventDate":"2017-08-07T04:52:17Z"},{"id":22761,"type":"PATENT","title":"Citing Carnegie Mellon Univ publications","description":"Patent documents citing scholarly work of Carnegie Mellon Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":16967,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:54:42Z","updated":"2017-08-07T04:54:42Z","lastEventDate":"2017-08-07T04:54:42Z"},{"id":22764,"type":"PATENT","title":"Citing Univ Paris Descartes publications","description":"Patent documents citing scholarly work of Univ Paris Descartes","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":10846,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:55:45Z","updated":"2017-08-07T04:55:45Z","lastEventDate":"2017-08-07T04:55:45Z"},{"id":22765,"type":"PATENT","title":"Citing Moscow State Univ publications","description":"Patent documents citing scholarly work of Moscow 
State Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":6231,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:55:55Z","updated":"2017-08-07T04:55:55Z","lastEventDate":"2017-08-07T04:55:55Z"},{"id":22766,"type":"PATENT","title":"Citing McGill Univ publications","description":"Patent documents citing scholarly work of McGill Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":33777,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:56:02Z","updated":"2017-08-07T04:56:02Z","lastEventDate":"2017-08-07T04:56:02Z"},{"id":22771,"type":"PATENT","title":"Citing Univ Southern California publications","description":"Patent documents citing scholarly work of Univ Southern California","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":40914,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens 
Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:57:43Z","updated":"2017-08-07T04:57:43Z","lastEventDate":"2017-08-07T04:57:43Z"},{"id":22773,"type":"PATENT","title":"Citing Univ Washington publications","description":"Patent documents citing scholarly work of Univ Washington","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":75126,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T04:58:47Z","updated":"2017-08-07T04:58:47Z","lastEventDate":"2017-08-07T04:58:47Z"},{"id":22779,"type":"PATENT","title":"Citing UBC publications","description":"Patent documents citing scholarly work of UBC","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":34693,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:01:43Z","updated":"2017-08-07T05:01:43Z","lastEventDate":"2017-08-07T05:01:43Z"},{"id":22782,"type":"PATENT","title":"Citing ETH Zurich publications","description":"Patent documents citing scholarly work of ETH 
Zurich","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":28682,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:02:38Z","updated":"2017-08-07T05:02:38Z","lastEventDate":"2017-08-07T05:02:38Z"},{"id":22784,"type":"PATENT","title":"Citing UC Irvine publications","description":"Patent documents citing scholarly work of UC Irvine","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":26448,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:03:21Z","updated":"2017-08-07T05:03:21Z","lastEventDate":"2017-08-07T05:03:21Z"},{"id":22786,"type":"PATENT","title":"Citing Duke Univ publications","description":"Patent documents citing scholarly work of Duke Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":52418,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens 
Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:04:02Z","updated":"2017-08-07T05:04:02Z","lastEventDate":"2017-08-07T05:04:02Z"},{"id":22790,"type":"PATENT","title":"Citing MIT publications","description":"Patent documents citing scholarly work of MIT","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":143355,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:05:29Z","updated":"2019-11-29T02:59:34Z","lastEventDate":"2019-11-29T02:59:34Z"},{"id":22796,"type":"PATENT","title":"Citing Univ Oregon publications","description":"Patent documents citing scholarly work of Univ Oregon","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":6250,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:08:57Z","updated":"2017-08-07T05:08:57Z","lastEventDate":"2017-08-07T05:08:57Z"},{"id":22799,"type":"PATENT","title":"Citing Univ Alabama System publications","description":"Patent documents citing scholarly work of Univ Alabama 
System","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":32860,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:09:26Z","updated":"2017-08-07T05:09:26Z","lastEventDate":"2017-08-07T05:09:26Z"},{"id":22802,"type":"PATENT","title":"Citing UC System publications","description":"Patent documents citing scholarly work of UC System","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":266608,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:10:38Z","updated":"2017-08-07T05:10:38Z","lastEventDate":"2017-08-07T05:10:38Z"},{"id":22805,"type":"PATENT","title":"Citing UC San Francisco publications","description":"Patent documents citing scholarly work of UC San Francisco","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":73241,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens 
Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:15:33Z","updated":"2017-08-07T05:15:33Z","lastEventDate":"2017-08-07T05:15:33Z"},{"id":22806,"type":"PATENT","title":"Citing Durham Univ publications","description":"Patent documents citing scholarly work of Durham Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":5992,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:16:35Z","updated":"2017-08-07T05:16:35Z","lastEventDate":"2017-08-07T05:16:35Z"},{"id":22819,"type":"PATENT","title":"Citing Korea Univ publications","description":"Patent documents citing scholarly work of Korea Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":6825,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:20:39Z","updated":"2017-08-07T05:20:39Z","lastEventDate":"2017-08-07T05:20:39Z"},{"id":22820,"type":"PATENT","title":"Citing Univ Groningen publications","description":"Patent documents citing scholarly work of Univ 
Groningen","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":21355,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:20:46Z","updated":"2017-08-07T05:20:46Z","lastEventDate":"2017-08-07T05:20:46Z"},{"id":22824,"type":"PATENT","title":"Citing Rockefeller Univ publications","description":"Patent documents citing scholarly work of Rockefeller Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":32455,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:22:00Z","updated":"2017-08-07T05:22:00Z","lastEventDate":"2017-08-07T05:22:00Z"},{"id":22830,"type":"PATENT","title":"Citing Univ Pittsburgh publications","description":"Patent documents citing scholarly work of Univ Pittsburgh","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":39162,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens 
Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:24:02Z","updated":"2017-08-07T05:24:02Z","lastEventDate":"2017-08-07T05:24:02Z"},{"id":22848,"type":"PATENT","title":"Citing Stanford Univ publications","description":"Patent documents citing scholarly work of Stanford Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":108345,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:31:21Z","updated":"2017-08-07T05:31:21Z","lastEventDate":"2017-08-07T05:31:21Z"},{"id":22852,"type":"PATENT","title":"Citing Univ Oxford publications","description":"Patent documents citing scholarly work of Univ Oxford","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":51799,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:33:16Z","updated":"2017-08-07T05:33:16Z","lastEventDate":"2017-08-07T05:33:16Z"},{"id":22855,"type":"PATENT","title":"Citing University College London (UCL) publications","description":"Patent documents citing scholarly work of University College 
London (UCL)","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":53487,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:34:28Z","updated":"2017-08-07T05:34:28Z","lastEventDate":"2017-08-07T05:34:28Z"},{"id":22856,"type":"PATENT","title":"Citing NIH publications","description":"Patent documents citing scholarly work of NIH","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":151649,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:35:15Z","updated":"2017-08-07T05:35:15Z","lastEventDate":"2017-08-07T05:35:15Z"},{"id":22857,"type":"PATENT","title":"Citing Univ Bristol publications","description":"Patent documents citing scholarly work of Univ Bristol","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":13107,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens 
Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:37:22Z","updated":"2017-08-07T05:37:22Z","lastEventDate":"2017-08-07T05:37:22Z"},{"id":22859,"type":"PATENT","title":"Citing Harvard Univ publications","description":"Patent documents citing scholarly work of Harvard Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":175224,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:37:44Z","updated":"2017-08-07T05:37:44Z","lastEventDate":"2017-08-07T05:37:44Z"},{"id":22861,"type":"PATENT","title":"Citing Caltech publications","description":"Patent documents citing scholarly work of Caltech","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":38468,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:41:12Z","updated":"2017-08-07T05:41:12Z","lastEventDate":"2017-08-07T05:41:12Z"},{"id":22869,"type":"PATENT","title":"Citing Univ Tokyo publications","description":"Patent documents citing scholarly work of Univ 
Tokyo","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":63341,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:43:41Z","updated":"2017-08-07T05:43:41Z","lastEventDate":"2017-08-07T05:43:41Z"},{"id":22870,"type":"PATENT","title":"Citing UC Santa Cruz publications","description":"Patent documents citing scholarly work of UC Santa Cruz","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":6086,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:44:35Z","updated":"2017-08-07T05:44:35Z","lastEventDate":"2017-08-07T05:44:35Z"},{"id":22878,"type":"PATENT","title":"Citing Univ London publications","description":"Patent documents citing scholarly work of Univ London","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":96093,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens 
Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:47:35Z","updated":"2017-08-07T05:47:35Z","lastEventDate":"2017-08-07T05:47:35Z"},{"id":22881,"type":"PATENT","title":"Citing Univ Geneva publications","description":"Patent documents citing scholarly work of Univ Geneva","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":23097,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:49:38Z","updated":"2017-08-07T05:49:38Z","lastEventDate":"2017-08-07T05:49:38Z"},{"id":22884,"type":"PATENT","title":"Citing Max Planck Society publications","description":"Patent documents citing scholarly work of Max Planck Society","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":71114,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:50:10Z","updated":"2017-08-07T05:50:10Z","lastEventDate":"2017-08-07T05:50:10Z"},{"id":22895,"type":"PATENT","title":"Citing UC Santa Barbara publications","description":"Patent documents citing scholarly work of UC Santa 
Barbara","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":21188,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:55:29Z","updated":"2017-08-07T05:55:29Z","lastEventDate":"2017-08-07T05:55:29Z"},{"id":22899,"type":"PATENT","title":"Citing Univ Rochester publications","description":"Patent documents citing scholarly work of Univ Rochester","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":24884,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:56:06Z","updated":"2017-08-07T05:56:06Z","lastEventDate":"2017-08-07T05:56:06Z"},{"id":22901,"type":"PATENT","title":"Citing Johns Hopkins Univ publications","description":"Patent documents citing scholarly work of Johns Hopkins Univ","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":76487,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens 
Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T05:56:37Z","updated":"2017-08-07T05:56:37Z","lastEventDate":"2017-08-07T05:56:37Z"},{"id":22911,"type":"PATENT","title":"Citing Univ Cambridge publications","description":"Patent documents citing scholarly work of Univ Cambridge","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":58870,"tags":[],"user":{"id":233682368,"username":"tech","firstName":"The Lens","lastName":"Team","created":"2017-08-06T20:11:49.000Z","displayName":"The Lens Team","profilePictureKey":"lens/users/15eac2a0-031d-4923-92cb-a162e1cb2bbb/profile-picture","preferences":"{\"beta\":true}","accountType":"PERSONAL","isOauthOnly":false},"notes":[],"sharedType":"PUBLISHED","hasLinkedSavedQueries":false,"savedQueries":[],"created":"2017-08-07T06:00:47Z","updated":"2017-08-07T06:00:47Z","lastEventDate":"2017-08-07T06:00:47Z"}],"privateNotes":[],"landscapeCollections":[],"landscapeNotes":[]},"document":{"record_lens_id":"146-532-209-798-204","lens_id":["146-532-209-798-204","037-714-317-222-638"],"doc_key":"US_8649606_B2_20140211","created":"2016-01-15T08:22:39.613","docdb_id":414854101,"lens_internal":{"earliest_lens_id_created_time":"2016-01-15T08:22:39.613","last_modified":"2024-03-25T22:22:45.571","legacy_pub_key":"US_8649606_B2","has_doc_lang":true,"has_biblio_lang":true,"has_all_title_lang":true,"has_all_abstract_lang":true,"has_all_claims_lang":true,"has_description_lang":true},"jurisdiction":"US","doc_number":"8649606","kind":"B2","date_published":"2014-02-11","year_published":2014,"ids":["US_8649606_B2","146-532-209-798-204","037-714-317-222-638","US_8649606_B2_20140211","US","8649606","B2","US8649606B2","US8649606","8649606B2"],"lang":"en","publication_type":"GRANTED_PATENT","application_reference":{"j
urisdiction":"US","doc_number":"201113025118","kind":"A","date":"2011-02-10"},"priority_claim":[{"jurisdiction":"US","doc_number":"201113025118","kind":"A","date":"2011-02-10"},{"jurisdiction":"US","doc_number":"30322510","kind":"P","date":"2010-02-10"}],"priority_claim.source":"DOCDB","earliest_priority_claim_date":"2010-02-10","title":{"en":[{"text":"Methods and systems for generating saliency models through linear and/or nonlinear integration","lang":"en","source":"DOCDB","data_format":"DOCDBA"}]},"title_lang":["en"],"has_title":true,"applicant":[{"name":"ZHAO QI","residence":"US","sequence":1,"app_type":"applicant"},{"name":"KOCH CHRISTOF","residence":"US","sequence":2,"app_type":"applicant"},{"name":"CALIFORNIA INST OF TECHN","residence":"US","sequence":3,"app_type":"applicant"}],"applicant_count":3,"has_applicant":true,"inventor":[{"name":"ZHAO QI","residence":"US","sequence":1},{"name":"KOCH CHRISTOF","residence":"US","sequence":2}],"inventor_count":2,"has_inventor":true,"agent":[{"name":"Steinfl & Bruno, LLP","sequence":1}],"agent_count":1,"has_agent":true,"owner":[{"name":"CALIFORNIA INSTITUTE OF TECHNOLOGY","address":"1200 E. CALIFORNIA BLVD., MC 210-85, PASADENA, CALIFORNIA, 91125","sequence":1,"recorded_date":"2011-06-03","execution_date":"2011-04-22","is_current_owner":true}],"owner_count":1,"owner_all":[{"name":"CALIFORNIA INSTITUTE OF TECHNOLOGY","address":"1200 E. 
CALIFORNIA BLVD., MC 210-85, PASADENA, CALIFORNIA, 91125","sequence":1,"recorded_date":"2011-06-03","execution_date":"2011-04-22","is_current_owner":true}],"owner_all_count":1,"has_owner":true,"primary_examiner":{"name":"Chan S Park","department":"2669"},"assistant_examiner":{"name":"Mia M Thomas"},"has_examiner":true,"class_ipcr":[{"symbol":"A61N1/00","version_indicator":"2006-01-01","class_symbol_position":"L","class_value":"I","action_date":"2014-02-11","class_status":"B","class_data_source":"H","generating_office":"US","sequence":1},{"symbol":"G06V10/774","version_indicator":"2022-01-01","class_symbol_position":"L","class_value":"I","action_date":"2023-06-29","class_status":"R","class_data_source":"H","generating_office":"EP","sequence":2}],"class_ipcr.later_symbol":["A61N1/00","G06V10/774"],"class_ipcr.inv_symbol":["A61N1/00","G06V10/774"],"class_ipcr.add_symbol":[],"class_ipcr.source":"DOCDB","class_cpc":[{"symbol":"G06V10/451","version_indicator":"2022-01-01","class_symbol_position":"F","class_value":"I","action_date":"2022-12-22","class_status":"R","class_data_source":"H","generating_office":"EP","sequence":1},{"symbol":"G06V10/462","version_indicator":"2022-01-01","class_symbol_position":"L","class_value":"I","action_date":"2022-12-16","class_status":"R","class_data_source":"H","generating_office":"EP","sequence":2},{"symbol":"G06V10/774","version_indicator":"2022-01-01","class_symbol_position":"L","class_value":"I","action_date":"2022-12-16","class_status":"R","class_data_source":"H","generating_office":"EP","sequence":3},{"symbol":"G06V10/462","version_indicator":"2022-01-01","class_symbol_position":"L","class_value":"I","action_date":"2022-01-01","class_status":"R","class_data_source":"C","generating_office":"US","sequence":4},{"symbol":"G06V10/451","version_indicator":"2022-01-01","class_symbol_position":"F","class_value":"I","action_date":"2022-01-01","class_status":"R","class_data_source":"C","generating_office":"US","sequence":5},{"symbol":"G06F18/21
4","version_indicator":"2023-01-01","class_symbol_position":"L","class_value":"I","action_date":"2023-01-01","class_status":"R","class_data_source":"C","generating_office":"US","sequence":6},{"symbol":"G06V10/774","version_indicator":"2022-01-01","class_symbol_position":"L","class_value":"I","action_date":"2023-01-11","class_status":"R","class_data_source":"H","generating_office":"US","sequence":7}],"class_cpc_cset":[],"class_cpc.first_symbol":"G06V10/451","class_cpc.later_symbol":["G06V10/462","G06V10/774","G06V10/462","G06F18/214","G06V10/774"],"class_cpc.inv_symbol":["G06V10/451","G06V10/462","G06V10/774","G06V10/462","G06V10/451","G06F18/214","G06V10/774"],"class_cpc.add_symbol":[],"class_cpc.source":"DOCDB","class_national":[{"symbol":"382/195","symbol_position":"F"},{"symbol":"382/159","symbol_position":"L"},{"symbol":"382/165","symbol_position":"L"},{"symbol":"607/54","symbol_position":"L"}],"class_national.first_symbol":"382/195","class_national.later_symbol":["382/159","382/165","607/54"],"class_national.source":"DOCDB","reference_cited":[{"patent":{"num":1,"document_id":{"jurisdiction":"US","doc_number":"5983218","kind":"A","date":"1999-11-09","name":"SYEDA-MAHMOOD TANVEER F [US]"},"lens_id":"092-002-218-909-601","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":1}},{"patent":{"num":2,"document_id":{"jurisdiction":"US","doc_number":"6104858","kind":"A","date":"2000-08-15","name":"SUZUKI KOJI [JP]"},"lens_id":"039-477-677-949-044","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":2}},{"patent":{"num":3,"document_id":{"jurisdiction":"US","doc_number":"6720997","kind":"B1","date":"2004-04-13","name":"HORIE DAISAKU [JP], et al"},"lens_id":"093-843-050-121-513","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":3}},{"patent":{"num":4,"document_id":{"jurisdiction":"US","doc_number":"7027054","kind":"B1","date":"2006-04-11","name":"CHEIKY MICHAEL [US], et 
al"},"lens_id":"193-997-904-304-784","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":4}},{"patent":{"num":5,"document_id":{"jurisdiction":"US","doc_number":"7620267","kind":"B2","date":"2009-11-17","name":"WIDDOWSON SIMON [US]"},"lens_id":"083-287-414-948-013","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":5}},{"patent":{"num":6,"document_id":{"jurisdiction":"US","doc_number":"7848596","kind":"B2","date":"2010-12-07","name":"WIDDOWSON SIMON [US]"},"lens_id":"085-372-363-432-226","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":6}},{"patent":{"num":7,"document_id":{"jurisdiction":"US","doc_number":"8396249","kind":"B1","date":"2013-03-12","name":"KHOSLA DEEPAK [US], et al"},"lens_id":"057-880-964-675-448","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":7}},{"patent":{"num":8,"document_id":{"jurisdiction":"US","doc_number":"2002081033","kind":"A1","date":"2002-06-27","name":"STENTIFORD FREDERICK W M [GB]"},"lens_id":"021-545-019-349-061","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":8}},{"patent":{"num":9,"document_id":{"jurisdiction":"US","doc_number":"2002126891","kind":"A1","date":"2002-09-12","name":"OSBERGER WILFRIED M [US]"},"lens_id":"138-823-749-235-39X","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":9}},{"patent":{"num":10,"document_id":{"jurisdiction":"US","doc_number":"2002154833","kind":"A1","date":"2002-10-24","name":"KOCH CHRISTOF [US], et al"},"lens_id":"062-707-610-595-450","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":10}},{"patent":{"num":11,"document_id":{"jurisdiction":"US","doc_number":"2005047647","kind":"A1","date":"2005-03-03","name":"RUTISHAUSER UELI [US], et 
al"},"lens_id":"071-390-063-284-568","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":11}},{"patent":{"num":12,"document_id":{"jurisdiction":"US","doc_number":"2005069206","kind":"A1","date":"2005-03-31","name":"MA YU-FEI [CN], et al"},"lens_id":"168-785-977-111-595","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":12}},{"patent":{"num":13,"document_id":{"jurisdiction":"US","doc_number":"2005084136","kind":"A1","date":"2005-04-21","name":"XIE XING [CN], et al"},"lens_id":"125-048-216-873-245","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":13}},{"patent":{"num":14,"document_id":{"jurisdiction":"US","doc_number":"2005163344","kind":"A1","date":"2005-07-28","name":"KAYAHARA NAOKI [JP], et al"},"lens_id":"031-632-028-684-820","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":14}},{"patent":{"num":15,"document_id":{"jurisdiction":"US","doc_number":"2006215922","kind":"A1","date":"2006-09-28","name":"KOCH CHRISTOF [US], et al"},"lens_id":"076-452-041-378-128","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":15}},{"patent":{"num":16,"document_id":{"jurisdiction":"US","doc_number":"2007183669","kind":"A1","date":"2007-08-09","name":"OWECHKO YURI [US], et al"},"lens_id":"095-261-505-755-882","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":16}},{"patent":{"num":17,"document_id":{"jurisdiction":"US","doc_number":"2009226057","kind":"A1","date":"2009-09-10","name":"MASHIACH ADI [IL], et al"},"lens_id":"132-347-042-328-282","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":17}},{"patent":{"num":18,"document_id":{"jurisdiction":"US","doc_number":"2009245625","kind":"A1","date":"2009-10-01","name":"IWAKI YASUHARU [JP], et 
al"},"lens_id":"106-159-548-671-888","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":18}},{"patent":{"num":19,"document_id":{"jurisdiction":"US","doc_number":"2010268301","kind":"A1","date":"2010-10-21","name":"PARIKH NEHA J [US], et al"},"lens_id":"097-617-260-354-278","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":19}},{"patent":{"num":20,"document_id":{"jurisdiction":"US","doc_number":"2011175904","kind":"A1","date":"2011-07-21","name":"VAN BAAR JEROEN [CH], et al"},"lens_id":"047-821-478-858-158","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":20}},{"patent":{"num":21,"document_id":{"jurisdiction":"US","doc_number":"2013084013","kind":"A1","date":"2013-04-04","name":"TANG HAO [US]"},"lens_id":"017-289-676-092-310","category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[],"sequence":21}},{"npl":{"num":1,"text":"Itti et al. \"A Model of Saliency Based Visual Attention for Rapid Scene Analysis\" IEEE Transactions on Pattern Analysis and Machine Intelligence vol. 20, No. 11, Nov. 1998 pp. 1254-1259.","npl_type":"a","external_id":["10.1109/34.730558"],"record_lens_id":"096-717-380-007-985","lens_id":["135-400-810-419-62X","096-717-380-007-985"],"sequence":22,"category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[]}},{"npl":{"num":2,"text":"Tatler et al. \"Visual Correlates of Fixation Selection: effects of scale and time\" Vision Research Elsevier Nov. 2003 pp. 
1-17.","npl_type":"a","external_id":[],"lens_id":[],"sequence":23,"category":[],"us_category":[],"cited_phase":"SEA","rel_claims":[]}},{"patent":{"num":1,"document_id":{"jurisdiction":"US","doc_number":"5210799","kind":"A","date":"1993-05-11","name":"RAO KASHI [US]"},"lens_id":"067-604-352-569-793","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":1}},{"patent":{"num":2,"document_id":{"jurisdiction":"US","doc_number":"5357194","kind":"A","date":"1994-10-18","name":"ULLMAN SHIMON [IL], et al"},"lens_id":"065-956-500-557-492","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":2}},{"patent":{"num":3,"document_id":{"jurisdiction":"US","doc_number":"5566246","kind":"A","date":"1996-10-15","name":"RAO KASHI [US]"},"lens_id":"130-328-608-126-289","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":3}},{"patent":{"num":4,"document_id":{"jurisdiction":"US","doc_number":"5598355","kind":"A","date":"1997-01-28","name":"DEROU DOMINIQUE [FR], et al"},"lens_id":"014-433-581-246-422","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":4}},{"patent":{"num":5,"document_id":{"jurisdiction":"US","doc_number":"5801810","kind":"A","date":"1998-09-01","name":"ROENKER DANIEL L [US]"},"lens_id":"155-791-581-711-948","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":5}},{"patent":{"num":6,"document_id":{"jurisdiction":"US","doc_number":"5929849","kind":"A","date":"1999-07-27","name":"KIKINIS DAN [US]"},"lens_id":"080-807-676-640-059","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":6}},{"patent":{"num":7,"document_id":{"jurisdiction":"US","doc_number":"5984475","kind":"A","date":"1999-11-16","name":"GALIANA HENRIETTA L [CA], et 
al"},"lens_id":"109-499-344-121-761","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":7}},{"patent":{"num":8,"document_id":{"jurisdiction":"US","doc_number":"6045226","kind":"A","date":"2000-04-04","name":"CLAESSENS DOMINIQUE PAUL GERAR [CH]"},"lens_id":"168-373-672-546-343","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":8}},{"patent":{"num":9,"document_id":{"jurisdiction":"US","doc_number":"6320976","kind":"B1","date":"2001-11-20","name":"MURTHY SREERAMA K V [IN], et al"},"lens_id":"199-698-108-491-812","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":9}},{"patent":{"num":10,"document_id":{"jurisdiction":"US","doc_number":"6442203","kind":"B1","date":"2002-08-27","name":"DEMOS GARY A [US]"},"lens_id":"075-906-914-584-172","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":10}},{"patent":{"num":11,"document_id":{"jurisdiction":"US","doc_number":"6670963","kind":"B2","date":"2003-12-30","name":"OSBERGER WILFRIED M [US]"},"lens_id":"169-388-957-464-34X","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":11}},{"patent":{"num":12,"document_id":{"jurisdiction":"US","doc_number":"2009092314","kind":"A1","date":"2009-04-09","name":"VARSHNEY AMITABH [US], et al"},"lens_id":"068-061-378-438-212","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":12}},{"patent":{"num":13,"document_id":{"jurisdiction":"US","doc_number":"2009110269","kind":"A1","date":"2009-04-30","name":"LE MEUR OLIVIER [FR], et al"},"lens_id":"079-196-963-058-242","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":13}},{"patent":{"num":14,"document_id":{"jurisdiction":"US","doc_number":"2009112287","kind":"A1","date":"2009-04-30","name":"GREENBERG ROBERT J [US], et 
al"},"lens_id":"101-873-795-955-614","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":14}},{"patent":{"num":15,"document_id":{"jurisdiction":"CN","doc_number":"101489139","kind":"A","date":"2009-07-22","name":"UNIV BEIJING [CN]"},"lens_id":"156-983-598-601-633","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":15}},{"patent":{"num":16,"document_id":{"jurisdiction":"CN","doc_number":"101601070","kind":"A","date":"2009-12-09","name":"THOMSON LICENSING SA [FR]"},"lens_id":"045-879-028-602-226","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":16}},{"patent":{"num":17,"document_id":{"jurisdiction":"CN","doc_number":"101697593","kind":"A","date":"2010-04-21","name":"UNIV WUHAN"},"lens_id":"146-649-347-931-324","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":17}},{"patent":{"num":18,"document_id":{"jurisdiction":"EP","doc_number":"2034439","kind":"A1","date":"2009-03-11","name":"THOMSON LICENSING [FR]"},"lens_id":"098-394-304-026-761","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":18}},{"patent":{"num":19,"document_id":{"jurisdiction":"EP","doc_number":"2136876","kind":"A2","date":"2009-12-30","name":"SECOND SIGHT MEDICAL PROD INC [US], et al"},"lens_id":"152-388-057-973-520","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":19}},{"patent":{"num":20,"document_id":{"jurisdiction":"KR","doc_number":"20080030604","kind":"A","date":"2008-04-04","name":"THOMSON LICENSING [FR]"},"lens_id":"140-053-238-330-233","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":20}},{"patent":{"num":21,"document_id":{"jurisdiction":"KR","doc_number":"20100013402","kind":"A","date":"2010-02-10","name":"UNIV CHUNG ANG IND 
[KR]"},"lens_id":"159-765-097-679-671","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":21}},{"patent":{"num":22,"document_id":{"jurisdiction":"WO","doc_number":"2007088193","kind":"A1","date":"2007-08-09","name":"THOMSON LICENSING [FR], et al"},"lens_id":"026-653-778-310-256","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":22}},{"patent":{"num":23,"document_id":{"jurisdiction":"WO","doc_number":"2008043204","kind":"A1","date":"2008-04-17","name":"THOMSON BROADBAND R & D BEIJIN [CN], et al"},"lens_id":"033-431-832-443-747","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":23}},{"patent":{"num":24,"document_id":{"jurisdiction":"WO","doc_number":"2008040945","kind":"A1","date":"2008-04-10","name":"IMP INNOVATIONS LTD [GB], et al"},"lens_id":"121-832-363-383-004","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":24}},{"patent":{"num":25,"document_id":{"jurisdiction":"WO","doc_number":"2008109771","kind":"A2","date":"2008-09-12","name":"SECOND SIGHT MEDICAL PROD INC [US], et al"},"lens_id":"139-023-295-453-915","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":25}},{"patent":{"num":26,"document_id":{"jurisdiction":"WO","doc_number":"2010109419","kind":"A1","date":"2010-09-30","name":"KONINKL PHILIPS ELECTRONICS NV [NL], et al"},"lens_id":"064-827-880-326-161","category":[],"us_category":[],"cited_phase":"APP","rel_claims":[],"sequence":26}},{"npl":{"num":1,"text":"Andersen, R.A. (1997). Multimodal integration for the representation of space in the posterior parietal cortex. 
Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 352, 1421-1428.","npl_type":"a","external_id":["pmc1692052","10.1098/rstb.1997.0128","9368930"],"record_lens_id":"075-954-072-388-12X","lens_id":["173-496-001-959-455","075-954-072-388-12X","186-555-081-612-636"],"sequence":27,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":2,"text":"Andersen, R. A., Bracewell, R. M., Barash, S., Gnadt, J. W., & Fogassi, L. (1990). Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7A of macaque. Journal of Neuroscience, 10, 1176-1196.","npl_type":"a","external_id":["2329374","pmc6570201","10.1523/jneurosci.10-04-01176.1990"],"record_lens_id":"075-263-141-962-074","lens_id":["197-914-589-718-969","075-263-141-962-074","133-055-403-727-273"],"sequence":28,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":3,"text":"Beymer, D., & Poggio, T. (1996). Image representations for visual learning. Science, 272, 1905-1909.","npl_type":"a","external_id":["10.1126/science.272.5270.1905","8658162"],"record_lens_id":"067-806-500-091-003","lens_id":["120-885-193-506-966","067-806-500-091-003","086-260-181-000-801"],"sequence":29,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":4,"text":"Braun, J., & Sagi, D. (1990). Vision outside the focus of attention. Perception and Psychophysics, 48, 45-58.","npl_type":"a","external_id":["10.3758/bf03205010","2377439"],"record_lens_id":"054-156-914-601-612","lens_id":["194-516-496-214-742","054-156-914-601-612","067-915-051-072-888"],"sequence":30,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":5,"text":"Braun, J. (1998). Vision and attention: the role of training (letter; comment). 
Nature, 393, 424-425.","npl_type":"a","external_id":["10.1038/30875","9623997"],"record_lens_id":"000-338-189-148-859","lens_id":["126-954-222-808-125","000-338-189-148-859","097-066-124-115-16X"],"sequence":31,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":6,"text":"Cannon, M. W., & Fullenkamp, S. C. (1991). Spatial interactions in apparent contrast: inhibitory effects among grating patterns of different spatial frequencies, spatial positions and orientations. Vision Research, 31, 1985-1998.","npl_type":"a","external_id":["10.1016/0042-6989(91)90193-9","1771782"],"record_lens_id":"060-977-376-657-55X","lens_id":["107-408-457-220-601","060-977-376-657-55X","164-961-415-344-180"],"sequence":32,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":7,"text":"Colby, C. L., & Goldberg, M. E. (1999). Space and attention in parietal cortex. Annual Review of Neuroscience, 22, 319-349.","npl_type":"a","external_id":["10.1146/annurev.neuro.22.1.319","10202542"],"record_lens_id":"011-820-330-886-91X","lens_id":["015-757-414-648-974","011-820-330-886-91X","141-501-837-074-481"],"sequence":33,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":8,"text":"Corbetta, M. (1998). Frontoparietal cortical networks for directing attention and the eye to visual locations: identical, independent, or overlapping neural systems? Proceedings of the National Academy of Sciences of the United States of America, 95, 831-838.","npl_type":"a","external_id":["pmc33805","10.1073/pnas.95.3.831","9448248"],"record_lens_id":"052-776-814-530-452","lens_id":["129-543-886-168-135","052-776-814-530-452","081-443-731-393-962"],"sequence":34,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":9,"text":"Crick, F., & Koch, C. (1998). Constraints on cortical and thalamic projections: the no-strong-loops hypothesis. 
Nature, 391, 245-250.","npl_type":"a","external_id":["9440687","10.1038/34584"],"record_lens_id":"014-365-096-740-468","lens_id":["191-287-755-506-163","014-365-096-740-468","037-534-167-576-52X"],"sequence":35,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":10,"text":"Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193-222.","npl_type":"a","external_id":["10.1146/annurev.neuro.18.1.193","10.1146/annurev.ne.18.030195.001205","7605061"],"record_lens_id":"077-223-748-392-772","lens_id":["163-387-311-390-051","077-223-748-392-772","160-173-181-018-346","144-704-772-812-162","115-057-510-221-856"],"sequence":36,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":11,"text":"DeValois, R. L., Albrecht, D. G., & Thorell, L. G. (1982). Spatial-frequency selectivity of cells in macaque visual cortex. Vision Research, 22, 545-559.","npl_type":"a","external_id":["10.1016/0042-6989(82)90113-4","7112954"],"record_lens_id":"041-325-258-243-919","lens_id":["133-006-749-952-928","041-325-258-243-919","119-487-167-622-50X"],"sequence":37,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":12,"text":"Driver, J., Mcleod, P., & Dienes, Z., Motion coherence and conjunction search: Implications for guided search theory. Perception and Psychophysics, 1992, 51, 79-85.","npl_type":"a","external_id":["1549427","10.3758/bf03205076"],"record_lens_id":"045-209-582-908-289","lens_id":["184-455-626-155-180","045-209-582-908-289","067-080-274-978-959"],"sequence":38,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":13,"text":"Engel, S., Zhang, X., & Wendell, B. (1997). Colour tuning in human visual cortex measured with functional magnetic resonance imaging. 
Nature, 388, 68-71.","npl_type":"a","external_id":["9214503","10.1038/40398"],"record_lens_id":"016-800-521-165-094","lens_id":["069-594-718-796-453","016-800-521-165-094","059-767-085-064-948"],"sequence":39,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":14,"text":"Gallant, J. L., Connor, C. E., & Essen, D. C. V. (1998). Neural activity in areas VI, V2 and V4 during free viewing of natural scenes compared to controlled viewing. Neuroreport, 9, 1673-1678.","npl_type":"a","external_id":["9631485","10.1097/00001756-199805110-00075"],"record_lens_id":"085-279-294-717-087","lens_id":["085-279-294-717-087","182-217-778-561-341"],"sequence":40,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":15,"text":"Gilbert, C. D., & Wiesel, T. N. (1983). Clustered intrinsic connections in cat visual cortex. Journal of Neuroscience, 3, 1116-1133.","npl_type":"a","external_id":["10.1523/jneurosci.03-05-01116.1983","6188819","pmc6564507"],"record_lens_id":"020-762-843-797-449","lens_id":["056-502-913-196-266","020-762-843-797-449","104-370-420-652-574"],"sequence":41,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":16,"text":"Gilbert, C. D., & Wiesel, T. N. (1989). Columnar specificity of intrinsic horizontal and corticocortical connections in cat visual cortex. Journal of Neuroscience, 9, 2432-2442.","npl_type":"a","external_id":["10.1523/jneurosci.09-07-02432.1989","pmc6569760","2746337"],"record_lens_id":"034-228-948-886-606","lens_id":["133-799-701-598-029","034-228-948-886-606","053-361-718-082-92X"],"sequence":42,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":17,"text":"Gilbert, C. D., Das, A., Ito, M., Kapadia, M., & Westheimer, G. (1996). Spatial integration and cortical dynamics. 
Proceedings of the National Academy of Sciences of the United States of America, 93, 615-622.","npl_type":"a","external_id":["8570604","10.1073/pnas.93.2.615","pmc40100"],"record_lens_id":"076-445-410-343-057","lens_id":["191-764-690-988-446","076-445-410-343-057","197-639-443-727-172"],"sequence":43,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":18,"text":"Gottlieb, J. P., Kusunoki, M., & Goldberg, M. E. (1998). The representation of visual salience in monkey parietal cortex. Nature, 391, 481-484.","npl_type":"a","external_id":["10.1038/35135","9461214"],"record_lens_id":"032-242-292-037-350","lens_id":["055-251-148-464-074","032-242-292-037-350","126-246-638-136-599"],"sequence":44,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":19,"text":"Greenspan, H., Belongie, S., Goodman, R., Perona, P., Rakshit, S., & Anderson, C. H. (1994). Overcomplete steerable pyramid filters and rotation invariance. In Proc. IEEE Computer Vision and Pattern Recognition (CVPR), Seattle, WA (June), 222-228.","npl_type":"a","external_id":["10.1109/cvpr.1994.323833"],"record_lens_id":"057-436-462-500-057","lens_id":["122-511-049-035-609","057-436-462-500-057"],"sequence":45,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":20,"text":"Hamker, F. H. (1999). The role of feedback connections in task driven visual search. In D. von Heinke, G. W. Humphreys, & A. Olson, Connectionist Models in Cognitive Neuroscience, Proc. of the 5th neural computation and psychology workshop (NCPW'98). London: Springer-Verlag, 252-261.","npl_type":"a","external_id":["10.1007/978-1-4471-0813-9_22"],"record_lens_id":"056-714-567-597-212","lens_id":["158-878-277-382-255","056-714-567-597-212"],"sequence":46,"category":[],"us_category":[],"cited_phase":"APP","rel_claims":[]}},{"npl":{"num":21,"text":"Hikosaka, O., Miyauchi, S., & Shimojo, S. (1996). 
References Cited, Non-Patent Literature (continued):

21. Orienting of spatial attention: its reflexive, compensatory, and voluntary mechanisms. Cognitive Brain Research, 5, 1-9.
22. Hillstrom, A. P., & Yantis, S. (1994). Visual motion and attentional capture. Perception and Psychophysics, 55, 399-411.
23. Horiuchi, T., Morris, T., Koch, C., & DeWeerth, S. (1997). Analog VLSI circuits for attention-based, visual tracking. In M. Mozer, M. Jordan, & T. Petsche (Eds.), Neural Information Processing Systems (NIPS 9) (pp. 706-712). Cambridge, MA: MIT Press.
24. Horowitz, T. S., & Wolfe, J. M. (1998). Visual search has no memory. Nature, 394, 575-577.
25. Itti, L., et al. (1999). A comparison of feature combination strategies for saliency-based visual attention. In SPIE Human Vision and Electronic Imaging IV, San Jose, CA (in press), pp. 1-10.
26. Knierim, J. J., & Van Essen, D. C. (1992). Neuronal responses to static texture patterns in area V1 of the alert macaque monkey. Journal of Neurophysiology, 67, 961-980.
27. Koch, C., et al. (1985). Shifts in selective visual attention: towards the underlying neural circuitry. Human Neurobiology, 4, 219-227.
28. Kustov, A. A., & Robinson, D. L. (1996). Shared neural control of attentional shifts and eye movements. Nature, 384, 74-77.
29. Kwak, H. W., & Egeth, H. (1992). Consequences of allocating attention to locations and to other attributes. Perception and Psychophysics, 51, 455-464.
30. LaBerge, D., & Buchsbaum, M. S. (1990). Positron emission tomographic measurements of pulvinar activity during an attention task. Journal of Neuroscience, 10, 613-619.
31. Lee, D. K., Itti, L., Koch, C., & Braun, J. (1999). Attention activates winner-take-all competition among visual filters. Nature Neuroscience, 2, 375-381.
32. Levitt, J. B., & Lund, J. S. (1997). Contrast dependence of contextual effects in primate visual cortex. Nature, 387, 73-76.
33. Luschow, A., & Nothdurft, H. C. (1993). Pop-out of orientation but no pop-out of motion at isoluminance. Vision Research, 33, 91-104.
34. Malach, R. (1994). Cortical columns as devices for maximizing neuronal diversity. Trends in Neurosciences, 17, 101-104.
35. Malach, R., Amir, Y., Harel, M., & Grinvald, A. (1993). Relationship between intrinsic connections and functional architecture revealed by optical imaging and in vivo targeted biocytin injections in primate striate cortex. Proceedings of the National Academy of Sciences USA, 90, 10469-10473.
36. Malik, J., & Perona, P. (1990). Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America A, 7, 923-932.
37. Motter, B. C., & Belky, E. J. (1998). The guidance of eye movements during active visual search. Vision Research, 38, 1805-1815.
38. Nakayama, K., & Mackeben, M. (1989). Sustained and transient components of focal visual attention. Vision Research, 29, 1631-1647.
39. Niebur, E., & Koch, C. (1996). Control of selective visual attention: modeling the 'where' pathway. In D. Touretzky, M. Mozer, & M. Hasselmo (Eds.), Neural Information Processing Systems (NIPS 8) (pp. 802-808). Cambridge, MA: MIT Press.
40. Niebur, E., & Koch, C. (1998). Computational architectures for attention. In R. Parasuraman (Ed.), The Attentive Brain (pp. 163-186). Cambridge, MA: MIT Press.
41. Niyogi, P., Girosi, F., & Poggio, T. (1998). Incorporating prior information in machine learning by creating virtual examples. Proceedings of the IEEE, 86, 2196-2209.
42. Noton, D., & Stark, L. (1971). Scanpaths in eye movements during pattern perception. Science, 171, 308-311.
43. O'Regan, J. K., Rensink, R. A., & Clark, J. J. (1999). Change-blindness as a result of 'mudsplashes'. Nature, 398, 34.
44. Olshausen, B. A., Anderson, C. H., & Van Essen, D. C. (1993). A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. Journal of Neuroscience, 13, 4700-4719.
45. Poggio, T. (1997). Image representations for visual learning. Lecture Notes in Computer Science, 1206, 143. (Abstract only.)
46. Polat, U., & Sagi, D. (1994). The architecture of perceptual spatial interactions. Vision Research, 34, 73-78.
47. Polat, U., & Sagi, D. (1994). Spatial interactions in human vision: from near to far via experience-dependent cascades of connections. Proceedings of the National Academy of Sciences USA, 91, 1206-1209.
48. Rao, R. P. N., & Ballard, D. H. (1995). An active vision architecture based on iconic representations. Artificial Intelligence, 78, 461-505.
49. Robinson, D. L., & Petersen, S. E. (1992). The pulvinar and visual salience. Trends in Neurosciences, 15, 127-132.
50. Rockland, K. S., & Lund, J. S. (1983). Intrinsic laminar lattice connections in primate visual cortex. Journal of Comparative Neurology, 216, 303-318.
51. Rockland, K. S., Andresen, J., Cowie, R. J., & Robinson, D. L. (1999). Single axon analysis of pulvinocortical connections to several visual areas in the macaque. Journal of Comparative Neurology, 406, 221-250.
52. Saarinen, J., & Julesz, B. (1991). The speed of attentional shifts in the visual field. Proceedings of the National Academy of Sciences USA, 88, 1812-1814.
53. Sheliga, B. M., Riggio, L., & Rizzolatti, G. (1994). Orienting of attention and eye movements. Experimental Brain Research, 98, 507-522.
54. Shepherd, M., Findlay, J. M., & Hockey, R. J. (1986). The relationship between eye movements and spatial attention. Quarterly Journal of Experimental Psychology, 38, 475-491.
55. Sillito, A. M., & Jones, H. E. (1996). Context-dependent interactions and visual processing in V1. Journal of Physiology Paris, 90, 205-209.
56. Simoncelli, E. P., Freeman, W. T., Adelson, E. H., & Heeger, D. J. (1992). Shiftable multiscale transforms. IEEE Transactions on Information Theory, 38, 587-607.
57. Tootell, R. B., Hamilton, S. L., Silverman, M. S., & Switkes, E. (1988). Functional anatomy of macaque striate cortex. I. Ocular dominance, binocular interactions, and baseline conditions. Journal of Neuroscience, 8, 1500-1530.
58. Treisman, A., & Gormican, S. (1988). Feature analysis in early vision: evidence from search asymmetries. Psychological Review, 95, 15-48.
59. Ts'o, D. Y., Gilbert, C. D., & Wiesel, T. N. (1986). Relationships between horizontal interactions and functional architecture in cat striate cortex as revealed by cross-correlation analysis. Journal of Neuroscience, 6, 1160-1170.
60. Wagenaar, W. A. (1969). Note on the construction of digram-balanced latin squares. Psychological Bulletin, 72, 384-386.
61. Weliky, M., Kandler, K., Fitzpatrick, D., & Katz, L. C. (1995). Patterns of excitation and inhibition evoked by horizontal connections in visual cortex share a common relationship to orientation columns. Neuron, 15, 541-552.
62. Wolfe, J. M. (1994). Visual search in continuous, naturalistic stimuli. Vision Research, 34, 1187-1195.
63. Yuille, A. L., & Grzywacz, N. M. (1989). A mathematical analysis of the motion coherence theory. International Journal of Computer Vision, 3, 155-175.
64. Zenger, B., & Sagi, D. (1996). Isolating excitatory and inhibitory nonlinear spatial interactions involved in contrast detection. Vision Research, 36, 2497-2513.
65. Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202-238.
66. Reinagel, P., & Zador, A. M. (1999). Natural scene statistics at the centre of gaze. Network: Computation in Neural Systems, 10, 1-10.
67. Kovacs, I., & Julesz, B. (1993). A closed curve is much more than an incomplete one: Effect of closure in figure-ground segmentation. Proceedings of the National Academy of Sciences USA, 90(16), 7495-7497.
68. Baluja, S., & Pomerleau, D. A. (1997). Expectation-based selective attention for visual monitoring and control of a robot vehicle. Robotics and Autonomous Systems, 22(3-4), 329-344.
69. Cannon, M. W., & Fullenkamp, S. C. (1996). A model for inhibitory lateral interaction effects in perceived contrast. Vision Research, 36(8), 1115-1125.
70. Milanese, R., Gil, S., & Pun, T. (1995). Attentive mechanisms for dynamic and static scene analysis. Optical Engineering, 34(8), 2428-2434.
71. Posner, M. I., & Cohen, Y. (1984). Components of visual orienting. In H. Bouma & D. G. Bouwhuis (Eds.), Attention and Performance, vol. 10 (pp. 531-556). Hillsdale, NJ: Erlbaum.
72. Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42(1), 107-123.
73. Cerf, M., Cleary, D., Peters, R., Einhauser, W., & Koch, C. (2007). Observers are consistent when rating image conspicuity. Vision Research, 47, 3052-3060.
74. Foulsham, T., et al. (2008). What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition. Journal of Vision, 8(2), 1-17.
75. Oliva, A., et al. (2003). Top-down control of visual attention in object detection. In International Conference on Image Processing (pp. I:253-I:256).
76. Dickinson, S., Christensen, H., Tsotsos, J., & Olofsson, G. (1994). Active object recognition integrating attention and viewpoint control. In European Conference on Computer Vision (pp. 3-14).
77. Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11), 1254-1259.
78. Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489-1506.
79. Einhauser, W., Rutishauser, U., Frady, E., Nadler, S., Konig, P., & Koch, C. (2006). The relation of phase noise and luminance contrast to overt attention in complex visual stimuli. Journal of Vision, 6(11), 1148-1158.
80. Cerf, M., Frady, E., & Koch, C. (2009). Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision, 9(12), 10:1-10:15.
81. Donk, M., & van Zoest, W. (2008). Effects of salience are short-lived. Psychological Science, 19(7), 733-739.
82. Masciocchi, C., et al. (2009). Everyone knows what is interesting: Salient locations which should be fixated. Journal of Vision, 9(11), 25:1-25:22.
83. Itti, L. (2005). Models of bottom-up attention and saliency. In L. Itti, G. Rees, & J. K. Tsotsos (Eds.), Neurobiology of Attention (pp. 576-582). Elsevier.
84. Itti, L., & Baldi, P. (2006). Bayesian surprise attracts human attention. In Advances in Neural Information Processing Systems (pp. 547-554).
85. Bruce, N., et al. (2006). Saliency based on information maximization. In Advances in Neural Information Processing Systems (pp. 155-162).
86. Harel, J., et al. (2007). Graph-based visual saliency. In Advances in Neural Information Processing Systems (pp. 545-552).
87. Nothdurft, H. (2000). Salience from feature contrast: Additivity across dimensions. Vision Research, 40, 1183-1201.
88. Hou, X., & Zhang, L. (2008). Dynamic visual attention: Searching for coding length increments. In Advances in Neural Information Processing Systems (pp. 1-8).
89. Freund, Y., & Schapire, R. (1996). Game theory, on-line prediction and boosting. In Conference on Computational Learning Theory (pp. 325-332).
90. Friedman, J., Hastie, T., & Tibshirani, R. (2000). Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2), 337-374.
91. Schapire, R., & Singer, Y. (1999). Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3), 297-336.
92. Vezhnevets, A., & Vezhnevets, V. (2005). Modest AdaBoost: Teaching AdaBoost to generalize better. In Graphicon (pp. 1-4).
93. Cerf, M., Harel, J., Einhauser, W., & Koch, C. (2008). Predicting human gaze using low-level saliency combined with face detection. In Advances in Neural Information Processing Systems (pp. 241-248).
94. Gao, D., Mahadevan, V., & Vasconcelos, N. (2007). The discriminant center-surround hypothesis for bottom-up saliency. In Advances in Neural Information Processing Systems (pp. 497-504).
95. Brainard, D. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433-436.
96. Cornelissen, F., Peters, E., & Palmer, J. (2002). The Eyelink Toolbox: Eye tracking with MATLAB and the Psychophysics Toolbox. Behavior Research Methods, Instruments and Computers, 34(4), 613-617.
97. Rubner, Y., Tomasi, C., & Guibas, L. (2000). The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40(2), 99-121.
98. Peters, R., Iyer, A., Itti, L., & Koch, C. (2005). Components of bottom-up gaze allocation in natural images. Vision Research, 45(18), 2397-2416.
99. Koene, A., et al. (2007). Feature-specific interactions in salience from combined feature contrasts: Evidence for a bottom-up saliency map in V1. Journal of Vision, 7(7), 6:1-6:14.

References cited in total: 47 patent documents and 101 non-patent literature references (86 resolved).

Cited By (23 US patent documents):
US 10198629 B2; US 11545131 B2; US 2019/0095741 A1; US 10860020 B2; US 11314985 B2; US 10643106 B2; US 2017/0236013 A1; US 9996771 B2; US 10748023 B2; US 8928815 B1; US 10599946 B2; US 10176396 B2; US 10452905 B2; US 9965031 B2; US 9700276 B2; US 2016/0132749 A1; US 9754163 B2; US 11521377 B1; US 9928418 B2; US 10013612 B2; US 2013/0245429 A1; US 2016/0026245 A1; US 10782777 B2.

Patent Family (simple and extended, 3 members):
US 8649606 B2, granted 2014-02-11; WO 2011/152893 A1, published 2011-12-08; US 2011/0229025 A1, published 2011-09-22.

Legal Status:
Patent for invention; granted 2014-02-11; earliest filing date 2011-02-10; anticipated term date 2031-05-09; status: ACTIVE.

Abstract:
Methods and systems for generating saliency models are discussed. Saliency models can be applied to visual scenes to generate predictions on which locations in the visual scenes are fixation locations and which locations are nonfixation locations. Saliency models are learned from fixation data on the visual scenes obtained from one or more subjects.

Claims:
1. A method for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the method comprising: providing visual fixation data from one or more subjects, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location; constructing one or more visual fixation maps based on the visual fixation data; providing a plurality of feature channels, wherein each feature channel in the plurality of feature channels detects a particular feature in the one or more images; constructing a plurality of conspicuity maps, wherein each conspicuity map in the plurality of conspicuity maps is associated with one feature channel in the plurality of feature channels, and wherein a particular conspicuity map is constructed based on features detected by a feature channel associated with the particular conspicuity map; and generating a plurality of weights, wherein the plurality of weights is a function of the one or more visual fixation maps and the plurality of conspicuity maps, thus generating the saliency model, wherein each weight in the plurality of weights is associated with one conspicuity map in the plurality of conspicuity maps.
2. The method of claim 1, wherein the generating a plurality of weights comprises: applying each weight in the plurality of weights to the conspicuity map associated with said weight to obtain a plurality of weighted conspicuity maps; computing a difference between the weighted conspicuity maps and the one or more visual fixation maps; adjusting each weight in the plurality of weights based on the difference; and iteratively performing the applying, the computing, and the adjusting until the difference between the weighted conspicuity maps and the one or more visual fixation maps is minimum, thus generating the plurality of weights.
3. The method of claim 1, wherein the generating a plurality of weights is performed using an active set method.
4. The method of claim 1, wherein each weight is nonnegative.
5. The method of claim 1, wherein the constructing one or more visual fixation maps comprises: generating one or more initial fixation maps from the visual fixation data; and convolving each of the one or more initial fixation maps with an isotropic Gaussian kernel to form the one or more visual fixation maps, wherein information from the one or more visual fixation maps is contained within a vectorized fixation map M_fix.
6.
The method of claim 1 , wherein the constructing one or more visual fixation maps comprises: generating one or more initial fixation maps from the visual fixation data; convolving each of the one or more initial fixation maps with an isotropic Gaussian kernel to form the one or more visual fixation maps, wherein information from the one or more visual fixation maps is contained within a vectorized fixation map M fix ; and wherein the constructing a plurality of conspicuity maps comprises: generating, for each conspicuity map, a feature vector based on features detected by a feature channel associated with the conspicuity map in the one or more images; and constructing a conspicuity matrix V, wherein each column or row of the conspicuity matrix is a feature vector, and wherein the generating a plurality of weights comprises generating a vector of weights w by solving a linear regression equation","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"7. The method of claim 1 , wherein the constructing a plurality of conspicuity maps comprises, for at least one of the feature channels: down-sampling the one or more images to obtain raw feature maps, wherein the raw feature maps comprise maps associated with a particular feature channel at different spatial resolutions; calculating differences between two maps associated with the particular feature channel at different spatial resolutions to generate a plurality of center-surround difference maps; and performing across-scale addition by summing the plurality of center-surround difference maps to obtain a conspicuity map associated with the particular feature channel.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"8. 
The method of claim 1 , wherein each feature channel is selected from the group consisting of a color feature channel, an intensity feature channel, an orientation feature channel, and a face feature channel.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"9. A method for generating a saliency map for an input image, comprising: generating a saliency model using the method of claim 1 ; and applying the saliency model at each location in the input image to generate the saliency map, wherein the saliency map provides a prediction of whether a particular location in the input image, when viewed by one or more subjects, will be a fixation location or a nonfixation location.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"10. A method for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the method comprising: providing visual fixation data from one or more subjects, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location; selecting a first plurality of locations in the one or more images, wherein each location in the first plurality of locations is a fixation location; selecting a second plurality of locations in the one or more images, wherein each location in the second plurality of locations is a nonfixation location; setting sample weights applied to each location in the first and second plurality of locations to a common value; providing a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images; generating a plurality of relevant maps for each feature channel based on features detected by each feature channel, wherein a feature value at each location in a particular relevant map is based on 
features detected by a feature channel associated with the particular relevant map; selecting a relevant map from among the plurality of relevant maps that minimizes weighted classification errors for the first and second plurality of locations, wherein a classification error occurs when a relevant map indicates that a particular location in the first plurality of locations is a nonfixation location or when a relevant map indicates that a particular location in the second plurality of locations is a fixation location, and wherein each classification error is multiplied with the sample weight associated with the particular location to form a weighted classification error; generating a thresholded relevant map for the selected relevant map, wherein a value associated with a particular location in the thresholded relevant map is either a first value or a second value, and wherein the value associated with the particular location is based on the feature value at the particular location in the selected relevant map; calculating a relevance weight applied to the selected relevant map based on the classification errors associated with the selected relevant map; adjusting the sample weights applied to each location in the first and second plurality of locations based on the relevance weight and the classification errors associated with the selected relevant map; iteratively performing the relevant map selecting, the relevance weight calculating, and the sample weight adjusting for a pre-defined number of times; and generating the saliency model based on a sum of a product between each relevance weight and the thresholded relevant map corresponding to the selected relevant map corresponding to the relevance weight.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"11. 
The method of claim 10 , wherein the generating a plurality of relevant maps comprises: down-sampling the one or more input images to obtain raw feature maps, wherein the raw feature maps comprise maps associated with a particular feature channel at different spatial resolutions and maps associated with different feature channels; calculating differences between two maps associated with the particular feature channel at different spatial resolutions to generate a plurality of center-surround difference maps; and performing across-scale addition by summing the plurality of center-surround difference maps to obtain a plurality of conspicuity maps, wherein the relevant maps comprise the raw feature maps, the plurality of center-surround difference maps, and the plurality of conspicuity maps.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"12. The method of claim 10 , wherein the selecting a relevant map comprises, for each relevant map: providing a set of threshold values; classifying each location in a particular relevant map as a fixation location or a nonfixation location based on a comparison between the feature value at each location in the particular relevant map and each threshold value in the set of threshold values; and calculating the classification errors of the particular relevant map based on a threshold value from among the set of threshold values; and selecting the relevant map and the threshold value that minimizes the classification errors.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"13. The method of claim 12 , wherein the generating a thresholded relevant map comprises, for each location in the thresholded relevant map: setting a value associated with a particular location based on comparing a feature value at the particular location in the selected relevant map and the selected threshold value.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"14. 
The method of claim 10 , wherein the adjusting the sample weights comprises increasing weight of samples associated with a classification error and/or decreasing weight of samples not associated with a classification error.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"15. The method of claim 10 , wherein each feature channel is selected from the group consisting of a color feature channel, an intensity feature channel, an orientation feature channel, and a face feature channel.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"16. A method for generating a saliency map for an input image, comprising: generating a saliency model using the method of claim 10 ; and applying the saliency model at each location in the input image to generate the saliency map, wherein the saliency map provides a prediction of whether a particular location in the input image, when viewed by one or more subjects, will be a fixation location or a nonfixation location.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"17. 
A system for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the system adapted to receive visual fixation data from one or more subjects, the system comprising: a visual fixation data mapping module configured to generate one or more visual fixation maps based on the visual fixation data, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location; a feature detection module comprising a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images; a conspicuity map generating module for conspicuity map construction for each feature channel based on features detected by the feature detection module for each feature channel, wherein the conspicuity map generating module is adapted to receive information for each feature channel from the feature detection module and is adapted to output a plurality of conspicuity maps; a weight computation module for generation of a plurality of weights, wherein the plurality of weights is a function of the one or more visual fixation maps from the visual fixation data mapping module and the plurality of conspicuity maps from the conspicuity map generating module, thus generating the saliency model, wherein each weight in the plurality of weights is associated with one conspicuity map in the plurality of conspicuity maps.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"18. 
A system for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the system adapted to receive visual fixation data from one or more subjects, the system comprising: a visual fixation data mapping module configured to generate one or more visual fixation maps based on the visual fixation data, wherein the visual fixation data comprises information on whether each location in one or more images viewed by the one or more subjects is a fixation location or a nonfixation location; a sampling module for selection of a first plurality of locations in the one or more images and a second plurality of locations in the one or more images, wherein each location in the first plurality of locations is a fixation location and each location in the second plurality of locations is a nonfixation location; a feature detection module comprising a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images; a map generating module for map construction for each feature channel based on features detected by the feature detection module for each feature channel, wherein the map generating module is adapted to receive information for each feature channel from the feature detection module and is adapted to output a plurality of relevant maps, and wherein a feature value at each location in a particular relevant map is based on features detected by a feature channel associated with the particular relevant map; a map selecting module for selection of one or more relevant maps from among the plurality of relevant maps based on classification errors, wherein a classification error occurs when a relevant map indicates that a particular location in the first plurality of locations is a fixation location or when a relevant map indicates that a particular location in the second plurality of locations is a 
nonfixation location, and wherein the selected relevant maps are associated with minimum weighted classification errors, wherein each classification error is multiplied with the sample weight associated with the particular location to form a weighted classification error; a sample weighting module for iterative setting and adjustment of sample weights, wherein a weight initially applied to each location in the first and second plurality of locations is a common value, and wherein the weight applied to each location in the first and second plurality of locations is adjusted based on the classification errors associated with the one or more selected relevant maps; a relevance weighting module for relevant weight calculation based on the classification errors associated with the one or more selected relevant maps; and a saliency model constructing module for saliency model construction based on the relevance weights and the one or more selected relevant maps.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"19. 
A non-transitory computer readable medium comprising computer executable software code stored in said medium, which computer executable software code, upon execution, carries out a method for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the method comprising: providing visual fixation data from one or more subjects, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location; constructing one or more visual fixation maps based on the visual fixation data; providing a plurality of feature channels, wherein each feature channel in the plurality of feature channels detects a particular feature in the one or more images; constructing a plurality of conspicuity maps, wherein each conspicuity map in the plurality of conspicuity maps is associated with one feature channel in the plurality of feature channels, and wherein a particular conspicuity map is constructed based on features detected by a feature channel associated with the particular conspicuity map; and generating a plurality of weights, wherein the plurality of weights is a function of the one or more visual fixation maps and the plurality of conspicuity maps, thus generating the saliency model, wherein each weight in the plurality of weights is associated with one conspicuity map in the plurality of conspicuity maps.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"},{"text":"20. 
A non-transitory computer readable medium comprising computer executable software code stored in said medium, which computer executable software code, upon execution, carries out a method for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the method comprising: providing visual fixation data from one or more subjects, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location; selecting a first plurality of locations in the one or more images, wherein each location in the first plurality of locations is a fixation location; selecting a second plurality of locations in the one or more images, wherein each location in the second plurality of locations is a nonfixation location; setting sample weights applied to each location in the first and second plurality of locations to a common value; providing a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images; generating a plurality of relevant maps for each feature channel based on features detected by each feature channel, wherein a feature value at each location in a particular relevant map is based on features detected by a feature channel associated with the particular relevant map; selecting a relevant map from among the plurality of relevant maps that minimizes weighted classification errors for the first and second plurality of locations, wherein a classification error occurs when a relevant map indicates that a particular location in the first plurality of locations is a nonfixation location or when a relevant map indicates that a particular location in the second plurality of locations is a fixation location, and wherein each classification error is multiplied with the sample weight 
associated with the particular location to form a weighted classification error; generating a thresholded relevant map for the selected relevant map, wherein a value associated with a particular location in the thresholded relevant map is either a first value or a second value, and wherein the value associated with the particular location is based on the feature value at the particular location in the selected relevant map; calculating a relevance weight applied to the selected relevant map based on the classification errors associated with the selected relevant map; adjusting the sample weights applied to each location in the first and second plurality of locations based on the relevance weight and the classification errors associated with the selected relevant map; iteratively performing the relevant map selecting, the relevance weight calculating, and the sample weight adjusting for a pre-defined number of times; and generating the saliency model based on a sum of a product between each relevance weight and the thresholded relevant map corresponding to the selected relevant map corresponding to the relevance weight.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"}]},"claim_lang":["en"],"has_claim":true,"description":{"en":{"text":"CROSS REFERENCE TO RELATED APPLICATIONS The present application claims priority to U.S. Provisional Application No. 61/303,225, entitled “Combining the Saliency Map Approach with Adaboost Improves Predictability of Eye Movements in Natural Scenes”, filed on Feb. 10, 2010, incorporated herein by reference in its entirety. The present application may be related to U.S. application Ser. No. 11/430,684 entitled “Computation of Intrinsic Perceptual Saliency in Visual Environments, and Applications”, filed on May 8, 2006, incorporated herein by reference in its entirety. STATEMENT OF GOVERNMENT GRANT This invention was made with government support under Grant No. HR0011-10-C-0034 awarded by DARPA and Grant No. 
N00014-10-1-0278 awarded by the Office of Naval Research. The government has certain rights in the invention. FIELD The disclosure relates generally to saliency models. More specifically, it relates to methods and systems for generating saliency models through linear and/or nonlinear integration. BACKGROUND Humans and other primates move their eyes to select visual information from a visual scene. This eye movement allows them to direct the fovea, which is the high-resolution part of their retina, onto relevant parts of the visual scene, thereby allowing processing of relevant visual information and allowing interpretation of complex scenes in real time. Commonalities between fixation patterns in synthetic or natural scenes from different individuals allow forming of computational models for predicting where people look in a visual scene (see references [1], [2], [3], and [4], incorporated herein by reference in their entireties). There are several models inspired by putative neural mechanisms for making such predictions (see reference [5], incorporated herein by reference in its entirety). One sensory-driven model of visual attention focuses on low-level features of a visual scene to evaluate salient areas of the visual scene. By way of example and not of limitation, low-level features generally refer to color, intensity, orientation, texture, motion, and depth present in the visual scene. 
With origins in Feature Integration Theory by Treisman and Gelade in 1980 (see reference [45], incorporated herein by reference in its entirety) and a proposal by Koch and Ullman (see reference [38], incorporated herein by reference in its entirety) for a map of a primate visual system that encodes extent to which any location in a given subject's field of view is conspicuous or salient, a series of ever refined models have been designed to predict where subjects will fixate their visual attention in synthetic or natural scenes (see references [1], [3], [4], [6], and [7], incorporated herein by reference in their entireties) based on bottom-up, task-independent factors (see references [31] and [47], incorporated herein by reference in their entireties). As used here, a “task” refers to a visual task that a subject may have in mind while viewing a visual scene. For instance, the subject may be given an explicit task or may give himself/herself an arbitrary, self-imposed task to “search for a red ball” in a visual scene. By viewing the visual scene with the task in mind (in this example, searching for red balls in the visual scene), the subject's viewing of the visual scene will be dependent on the particular task. Bottom-up, task independent factors are those factors found in the visual scene that attract a subject's attention strongly and rapidly, independent of tasks the subject is performing. These bottom-up factors depend on instantaneous sensory input and usually relate to properties of an input image or video. Bottom-up factors are in contrast to top-down factors, which take into account internal states of the subject such as goals of the subject at the time of viewing and personal experiences of the subject. An example of a goal would be that provided in the previous example of searching for a red ball, where the subject views the visual scene with a particular emphasis (or goal) on finding a red ball. 
An example of a personal experience would be if, upon seeing one piece of furniture in the visual scene, the subject's personal experience tells the subject to search for other pieces of furniture because the visual scene is likely to be displaying an interior of a building. As another example, a top-down factor can involve storing an image of a face of a specific person that a subject is searching for and correlating the image of the face across an entire visual scene. Correlating refers to matching a feature (the face from the image in this example) at different locations of the visual scene. A high correlation would correspond with a location where the feature is likely to be located. In these prediction models (see references [1], [6], and [7]), information on low-level features of any visual scene (such as color, intensity, and orientation) are combined to produce feature maps through center-surround filtering across numerous spatial scales (see references [1] and [6]). These feature maps across numerous spatial scales are normalized and combined to create an overall saliency map of the visual scene, which predicts human fixations significantly above chance (see reference [7]). 
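The center-surround pipeline described above (per-feature maps at several spatial scales, differenced, normalized, and summed into one saliency map) can be sketched in a few lines. This is a minimal illustration, not the patented implementation: it uses a box blur as a stand-in for the multi-scale Gaussian pyramids of references [1] and [6], operates on a single intensity channel, and all function names and scale pairs are illustrative.

```python
import numpy as np

def box_blur(img, radius):
    """Cheap separable box blur, standing in for Gaussian smoothing."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, img)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)
    return out

def center_surround_saliency(intensity, scales=((1, 4), (2, 8))):
    """Center-surround differences at several (center, surround) scale
    pairs, rectified, normalized to [0, 1], and summed across scales."""
    acc = np.zeros_like(intensity, dtype=float)
    for c, s in scales:
        center = box_blur(intensity, c)
        surround = box_blur(intensity, s)
        cs = np.abs(center - surround)      # center-surround difference map
        rng = cs.max() - cs.min()
        if rng > 0:
            cs = (cs - cs.min()) / rng      # per-scale normalization
        acc += cs                           # across-scale addition
    return acc
```

A small bright patch on a uniform background produces high values in and around the patch and zero far from it, matching the intuition that locally contrasting regions attract fixations.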
SUMMARY According to a first aspect of the disclosure, a method for generating a saliency model from one or more images is provided, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the method comprising: providing visual fixation data from one or more subjects, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location; constructing one or more visual fixation maps based on the visual fixation data; providing a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images; constructing a plurality of conspicuity maps, wherein each conspicuity map is associated with one feature channel in the plurality of feature channels, and wherein a particular conspicuity map is constructed based on features detected by a feature channel associated with the particular conspicuity map; and generating a plurality of weights, wherein the plurality of weights is a function of the one or more visual fixation maps and the plurality of conspicuity maps, thus generating the saliency model, wherein each weight is associated with one conspicuity map in the plurality of conspicuity maps. 
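The first aspect amounts to a nonnegative linear regression: stack each vectorized conspicuity map as a column of a matrix V and fit weights w so that V w approximates the vectorized fixation map (itself typically a fixation histogram blurred with an isotropic Gaussian). Claim 3 mentions an active-set method; the sketch below instead uses plain projected gradient descent on the least-squares objective, which reaches the same nonnegative solution for well-conditioned problems. All names are illustrative, not the patent's.

```python
import numpy as np

def learn_channel_weights(conspicuity_maps, fixation_map, iters=2000):
    """Fit nonnegative per-channel weights w with V @ w ~= m_fix,
    where each column of V is one vectorized conspicuity map."""
    V = np.column_stack([c.ravel() for c in conspicuity_maps])
    m = fixation_map.ravel()
    lr = 1.0 / np.linalg.norm(V, 2) ** 2    # step size from largest singular value
    w = np.zeros(V.shape[1])
    for _ in range(iters):
        w -= lr * V.T @ (V @ w - m)         # gradient step on ||Vw - m||^2 / 2
        np.maximum(w, 0.0, out=w)           # project onto the nonnegative orthant
    return w

def apply_saliency_model(conspicuity_maps, weights):
    """The learned saliency map is the weighted sum of conspicuity maps."""
    return sum(w * c for w, c in zip(weights, conspicuity_maps))
```

Given fixation maps that really are a nonnegative mixture of the channels, the fit recovers the mixing weights, and `apply_saliency_model` then scores any new image's conspicuity maps.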
According to a second aspect of the disclosure, a method for generating a saliency model from one or more images is provided, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the method comprising: providing visual fixation data from one or more subjects, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location; selecting a first plurality of locations in the one or more images, wherein each location in the first plurality of locations is a fixation location; selecting a second plurality of locations in the one or more images, wherein each location in the second plurality of locations is a nonfixation location; setting sample weights applied to each location in the first and second plurality of locations to a common value; providing a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images; generating a plurality of relevant maps for each feature channel based on features detected by each feature channel, wherein a feature value at each location in a particular relevant map is based on features detected by a feature channel associated with the particular relevant map; selecting a relevant map from among the plurality of relevant maps that minimizes weighted classification errors for the first and second plurality of locations, wherein a classification error occurs when a relevant map indicates that a particular location in the first plurality of locations is a nonfixation location or when a relevant map indicates that a particular location in the second plurality of locations is a fixation location, and wherein each classification error is multiplied with the sample weight associated with the particular location to form a weighted classification error; generating a thresholded relevant map for the 
selected relevant map, wherein a value associated with a particular location in the thresholded relevant map is either a first value or a second value, and wherein the value associated with the particular location is based on the feature value at the particular location in the selected relevant map; calculating a relevance weight applied to the selected relevant map based on the classification errors associated with the selected relevant map; adjusting the sample weights applied to each location in the first and second plurality of locations based on the relevance weight and the classification errors associated with the selected relevant map; iteratively performing the relevant map selecting, the relevance weight calculating, and the sample weight adjusting for a pre-defined number of times; and generating the saliency model based on a sum of a product between each relevance weight and the thresholded relevant map corresponding to the selected relevant map corresponding to the relevance weight. 
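The second aspect is recognizably a discrete AdaBoost loop: equal initial sample weights, repeated selection of the thresholded relevant map with minimum weighted classification error, a relevance weight per round, and multiplicative reweighting of misclassified samples. The sketch below boosts one-feature threshold stumps over per-location feature values; it is a generic AdaBoost illustration under assumed data shapes (one row per sampled location, one column per relevant map), not the patent's code.

```python
import numpy as np

def boost_saliency(features, labels, rounds=10):
    """Discrete AdaBoost over one-feature threshold stumps.

    features: (n_samples, n_maps) feature values at sampled locations
    labels:   +1 for fixation locations, -1 for nonfixation locations
    Returns a list of (map_index, threshold, polarity, relevance_weight).
    """
    n, d = features.shape
    w = np.full(n, 1.0 / n)                 # common initial sample weight
    model = []
    for _ in range(rounds):
        best = None
        for j in range(d):                  # try every relevant map
            for thr in np.unique(features[:, j]):
                for pol in (1, -1):         # direction of the threshold test
                    pred = np.where(pol * (features[:, j] - thr) > 0, 1, -1)
                    err = w[pred != labels].sum()   # weighted classification error
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)       # relevance weight
        w *= np.exp(-alpha * labels * pred)         # up-weight the mistakes
        w /= w.sum()
        model.append((j, thr, pol, alpha))
    return model

def predict(model, features):
    """Sign of the sum of relevance weight x thresholded map value."""
    score = np.zeros(features.shape[0])
    for j, thr, pol, alpha in model:
        score += alpha * np.where(pol * (features[:, j] - thr) > 0, 1, -1)
    return np.sign(score)
```

On data where one feature column separates fixations from nonfixations, a few rounds suffice for the boosted sum to classify the training locations correctly.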
According to a third aspect of the disclosure, a system for generating a saliency model from one or more images is provided, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the system adapted to receive visual fixation data from one or more subjects, the system comprising: a visual fixation data mapping module configured to generate one or more visual fixation maps based on the visual fixation data, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location; a feature detection module comprising a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images; a conspicuity map generating module for conspicuity map construction for each feature channel based on features detected by the feature detection module for each feature channel, wherein the conspicuity map generating module is adapted to receive information for each feature channel from the feature detection module and is adapted to output a plurality of conspicuity maps; a weight computation module for generation of a plurality of weights, wherein the plurality of weights is a function of the one or more visual fixation maps from the visual fixation data mapping module and the plurality of conspicuity maps from the conspicuity map generating module, thus generating the saliency model, wherein each weight in the plurality of weights is associated with one conspicuity map in the plurality of conspicuity maps. 
According to a fourth aspect of the disclosure, a system for generating a saliency model from one or more images is provided, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the system adapted to receive visual fixation data from one or more subjects, the system comprising: a visual fixation data mapping module configured to generate one or more visual fixation maps based on the visual fixation data, wherein the visual fixation data comprises information on whether each location in one or more images viewed by the one or more subjects is a fixation location or a nonfixation location; a sampling module for selection of a first plurality of locations in the one or more images and a second plurality of locations in the one or more images, wherein each location in the first plurality of locations is a fixation location and each location in the second plurality of locations is a nonfixation location; a feature detection module comprising a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images; a map generating module for map construction for each feature channel based on features detected by the feature detection module for each feature channel, wherein the map generating module is adapted to receive information for each feature channel from the feature detection module and is adapted to output a plurality of relevant maps, and wherein a feature value at each location in a particular relevant map is based on features detected by a feature channel associated with the particular relevant map; a map selecting module for selection of one or more relevant maps from among the plurality of relevant maps based on classification errors, wherein a classification error occurs when a relevant map indicates that a particular location in the first plurality of locations is a nonfixation location or when a relevant map indicates that a particular 
location in the second plurality of locations is a fixation location, and wherein the selected relevant maps are associated with minimum weighted classification errors, wherein each classification error is multiplied with the sample weight associated with the particular location to form a weighted classification error; a sample weighting module for iterative setting and adjustment of sample weights, wherein a weight initially applied to each location in the first and second plurality of locations is a common value, and wherein the weight applied to each location in the first and second plurality of locations is adjusted based on the classification errors associated with the one or more selected relevant maps; a relevance weighting module for relevance weight calculation based on the classification errors associated with the one or more selected relevant maps; and a saliency model constructing module for saliency model construction based on the relevance weights and the one or more selected relevant maps. A computer readable medium is also provided, comprising computer executable software code that can be executed to carry out the method provided in each of the first and second aspects of the disclosure. The methods and systems herein described can be used in connection with any applications wherein generation of saliency models is desired. The details of one or more embodiments of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF DRAWINGS

The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate one or more embodiments of the present disclosure and, together with the description of example embodiments, serve to explain the principles and implementations of the disclosure. FIG. 
1 shows a computational architecture of an Itti-Koch model, which is a computational saliency model that comprises color, intensity, and orientation feature channels, with an additional face feature channel. FIGS. 2A-2C provide illustrations of how feature weights affect final saliency maps. FIG. 2A shows an input image with lines indicating eye movement across the input image and circles indicating fixation locations in the input image. FIGS. 2B and 2C show saliency maps constructed using a linear combination with equal weights and optimal weights, respectively, for each of color, intensity, orientation, and face feature channels. FIGS. 3A and 3B show an illustration of an AdaBoost based saliency model, which comprises a training stage and a testing stage. Specifically, FIG. 3A shows the training stage while FIG. 3B shows the testing stage. FIG. 4A shows an input image used to acquire human fixation data. FIG. 4B shows a fixation map obtained from the human fixation data using the input image of FIG. 4A. FIG. 4C shows conspicuity maps for each of color, intensity, orientation, and face feature channels. FIG. 5A shows three images used to obtain saliency maps. FIG. 5B shows a recorded scanpath when a subject views an image. FIGS. 5C and 5D show saliency maps obtained using linear integration and nonlinear integration, respectively. FIGS. 6A-6B are schematic illustrations of a system for generating a saliency model from one or more images.

DETAILED DESCRIPTION

Inspired by the primate visual system, computational saliency models decompose visual input into a set of feature maps across spatial scales. Some representative features of visual input utilized by saliency models include, but are not limited to, color, intensity, orientation, motion, flicker, and faces. Other features, by way of example and not of limitation, include texture, depth, contrast, size, aspect ratio, gist, text, and pedestrians. 
Additional features are definable and identifiable by a person skilled in the art. As used in the present disclosure, the term “free-viewing” refers to looking at a visual scene (e.g., an image) without looking for anything in particular. The terms “fixation data” and “gaze data” can be used interchangeably, and, in the present disclosure, refer to data obtained from free-viewing of a visual scene. Although fixation data is obtained from human subjects in the present disclosure, it can also be obtained from other animals. The terms “non-activated” and “nonfixated” can be used interchangeably and the terms “activated” and “fixated” can be used interchangeably. The term “salient” or “visually salient” refers to parts of a visual scene that elicit a strong, rapid, and automatic response from a subject, independent of tasks being performed by the subject. “Strength” or “weight” of a location refers to saliency of the particular location, where higher strength corresponds with higher saliency. A “saliency value” is a prediction of saliency, generated by a saliency model, at a particular location. At any time, a currently strongest location in the saliency map corresponds to the most salient object. The term “feature channel” refers to a component of a saliency map model that takes into consideration a particular feature in a visual scene. For example, color found at a particular location in the visual scene will be detected and quantified using one or more color channels while brightness at a particular location in the visual scene will be detected and quantified using one or more intensity channels. Each feature can be detected using a feature detection module that comprises a plurality of feature channels. A “feature map” can refer to any one of a “raw feature map”, “center-surround map”, and “conspicuity map”, each of which will be discussed later in the present disclosure. The term “conspicuity map” refers to a final feature map of a particular feature channel. 
The term “saliency map” refers to a final feature map that integrates some or all feature maps of a visual scene. The saliency map displays saliency of all locations in the visual scene and is the result of competitive interactions among the conspicuity maps for features such as color, intensity, orientation, and faces that interact within and across each conspicuity map. A value in the saliency map represents local saliency of any one location with respect to its neighboring locations. Integration of all the conspicuity maps in the visual scene yields the saliency map of a particular visual scene. The obtained saliency map can be applied to predict locations at which people will and will not look, also referred to as fixation locations and nonfixation locations, respectively, in other visual scenes. The term “learning” refers to machine learning of a model through experimentally obtained information. Specifically, in the present disclosure, saliency models are learned through fixation data obtained from one or more subjects free-viewing one or more images. The extent to which bottom-up, task-independent saliency maps predict locations at which the subjects fixate their visual attention under free-viewing conditions remains under active investigation (see references [3], [10], and [11], incorporated herein by reference in their entireties). In some studies, bottom-up saliency has also been adopted (see references [30], [41], and [43], incorporated herein by reference in their entireties) to mimic top-down approaches. Since top-down approaches take into consideration goals and personal experiences of a subject viewing the visual scene, top-down approaches generally have larger inter-subject variability than bottom-up approaches. The present disclosure considers task-independent scrutiny of images when people are free-viewing a visual scene. 
Suggestions have been made concerning incorporation of higher-order statistics to fill gaps between the predictive power of saliency map models and their theoretical optimum (see references [8] and [39], incorporated herein by reference in their entireties). Higher-order statistics refers to those statistics of higher order than first- and second-order statistics (e.g., mean and variance, respectively). According to many embodiments of the present disclosure, another way to fill the gaps is through addition of more semantic feature channels into the saliency map model. Semantic feature channels, which relate to higher-level features of the visual scene, refer to those features in the visual scene with more meaningful content. Semantic feature channels could include objects (for example, faces, animals, text, electronic devices) and/or events (for example, people talking). Semantic features are in contrast to more abstract features of the visual scene such as color, intensity, and orientation, which are generally considered lower-level features. Incorporation of higher-level features generally improves accuracy in predicting where subjects will fixate their visual attention in a given visual scene (see references [9] and [31], incorporated herein by reference in their entireties). Within each feature channel, various normalization methods have been proposed to integrate multi-scale feature maps into a final saliency map (see reference [7]) referred to as a conspicuity map. It should be noted that the term “integrate” refers to a combination or addition of feature maps, either through linear combination or nonlinear combination. The normalizations are performed across feature maps at different scales of the same feature as well as feature maps across different features. As one example, normalization can involve taking all values across all feature maps corresponding to a particular channel and dividing every value by a largest value. 
After normalization in this example, the largest value across all feature maps corresponding to a particular channel is 1. As another example, normalization can involve dividing all values across all feature maps corresponding to a particular channel by a value, where a sum of all the values across all feature maps corresponding to the particular channel is 1 after the division. The focus of these normalization methods is to unify the multi-scale feature maps across different dynamic ranges and extraction mechanisms so that salient objects appearing strongly in a first set of multi-scale feature maps and not other multi-scale feature maps are less likely to be masked by these other multi-scale feature maps. For example, consider a first feature map that contains values between 0 and 1 and a second feature map that contains saliency values, also referred to as saliency scores, between 0 and 100. If the first feature map has salient regions (e.g., saliency values that are closer to 1 than they are to 0) and the second feature map has no salient regions (e.g., saliency values that are closer to 0 than they are to 100), then the salient regions of the first feature map will be more difficult to detect due to the salient regions being masked by the second feature map's determination of no salient regions. It should be noted that in some feature maps, higher values in a feature map correspond with lower saliency while lower values correspond with higher saliency. For example, for a feature map that provides values between 0 and 100, it cannot be assumed that a value close to 0 corresponds with lower saliency and a value close to 100 corresponds with higher saliency. It should be noted that values of feature maps are generally provided by feature extraction/computation modules. 
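The two normalization schemes described above (dividing by the largest value so the maximum becomes 1, and dividing by the total so all values sum to 1) can be sketched as follows, assuming each channel's feature maps are given as numeric arrays:

```python
import numpy as np

def normalize_by_max(maps):
    """Divide every value across a channel's feature maps by the largest
    value, so the maximum across all maps becomes 1."""
    peak = max(float(m.max()) for m in maps)
    return [m / peak for m in maps]

def normalize_to_unit_sum(maps):
    """Divide every value across a channel's feature maps by the total,
    so all values across the maps sum to 1."""
    total = sum(float(m.sum()) for m in maps)
    return [m / total for m in maps]
```

Either scheme places all channels on a common dynamic range, which is what prevents the masking effect described above.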
As an example, for a color channel pertaining to green, the green component of an RGB representation can be utilized as the feature value (e.g., a value between 0 and 255 for a 24-bit RGB representation). As another example, feature values for a contrast spatial feature channel can be computed based on absolute values for mean gray levels of a particular region and neighboring regions that share a 4-connected border with the particular region. Feature values provided by the modules depend on implementation of the modules, and thus typically feature values provided by one module differ from feature values provided by another module. For instance, for an intensity channel associated with a feature extraction/computation module, values of a feature map can range from 0 to 25. For an intensity channel associated with another module, values of a feature map can range from 0 to 100. Generally, linear integration of the feature channels is used to form a final saliency map (see references [6], [9], [12], and [14], incorporated herein by reference in their entireties). Linear integration of the feature channels is generally simple to apply and has some psychophysical support (see reference [15], incorporated herein by reference in its entirety), although linear integration does not have neurophysiological support. However, psychophysical arguments have been raised against linear integration methods (see references [28] and [40], incorporated herein by reference in their entireties). In addition, prior work (see references [26] and [33], incorporated herein by reference in their entireties) has been aware of the different weights, also referred to as strengths, contributed by different features to perceptual salience. In general, as more feature channels are used, more accurate saliency models can be learned. However, a tradeoff exists between computational cost and accuracy, both of which generally increase with more feature channels. 
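The contrast feature example above can be sketched as follows; the grid partition, the 4-connected neighborhood, and the summation of absolute differences are one plausible reading of the scheme, not a definitive implementation:

```python
import numpy as np

def contrast_feature(gray, grid=(4, 4)):
    """Contrast feature per region: sum of absolute differences between a
    region's mean gray level and the mean gray levels of regions sharing a
    4-connected border with it (a sketch; region shape and combination
    rule are assumptions)."""
    rows, cols = grid
    h, w = gray.shape
    means = np.array([[gray[i * h // rows:(i + 1) * h // rows,
                            j * w // cols:(j + 1) * w // cols].mean()
                       for j in range(cols)] for i in range(rows)])
    out = np.zeros_like(means)
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    out[i, j] += abs(means[i, j] - means[ni, nj])
    return out
```

A uniform image yields zero contrast everywhere; a sharp luminance boundary yields high values in the regions adjoining it.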
It should be noted that the final saliency map obtained through integration differs from a conspicuity map. The conspicuity map is a saliency map for a particular feature channel whereas the final saliency map obtained through integration is an integration of some or all of the conspicuity maps and possibly other feature maps (e.g., raw feature maps and center-surround maps). A discussion of generating the final saliency map for representing a learned saliency model can be found later in the present disclosure. Some studies propose different criteria to quantify saliency, such as “surprise” (see reference [12]) and “self-information” (see reference [13], incorporated herein by reference in its entirety). Other studies propose a graph based method (see reference [14]) for generating activation maps within each feature channel. Lacking sufficient neurophysiological evidence involving neural instantiation of feature integration mechanisms for saliency, studies have been performed using human eye tracking data to study feature integration methods. Human eye fixation data is obtained by presenting to the subjects many images (for example, hundreds or thousands of images) and asking the subjects to free-view the images. Based on the fixation data, a mapping module can map a location in each input image to a location in a fixation map. An eye tracker measures eye positions and movements during these viewing tasks. For any given subject, image locations at which the subject's eyes were fixated during the viewing are referred to as fixation locations, while image locations at which the subject did not fixate are referred to as nonfixation locations. Generally, human eye fixation data is represented by a two-dimensional fixation map that provides information on where (and sometimes provides information on when) subjects fixate their eyes at particular locations in one or more images, referred to as recorded fixations. 
An exemplary threshold used for classifying a location as either fixated or nonfixated is a time duration of fixation of 200 ms. In other words, a duration of fixation at a particular location of more than 200 ms would classify that particular location as a fixation location. An eye tracking database comprises the fixation data and the set of images utilized to generate said fixation data. Generally, the fixation data, for each image and each participating subject, is represented by a two-dimensional vector containing locations (in the image coordinate) of all fixations (and nonfixations) made within a certain period (e.g., 2-4 seconds). In reference [42] (incorporated herein by reference in its entirety), different conspicuity maps, also sometimes referred to as importance maps, are weighted based upon eye movement studies. Images are segmented into regions, and weights of different features are estimated by counting fixations in these regions. Fixations are counted by calculating percentage of fixations that landed on regions classified by a particular feature as most important. “Most important regions” refer to those regions with the highest saliency values. For example, for all fixation data of M subjects, consider that 10% of fixations fall in regions classified as most important regions when color is used for classification and 5% of fixations fall in regions classified as most important regions when size is used for classification. In this case, a weight associated with color would be twice a weight associated with size. Applicants provide learning based methods that use more principled learning techniques to derive feature integration and are closely related to two recent works that also learned saliency models from fixation data (see references [36] and [37], incorporated herein by reference in their entireties). 
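Under the counting scheme of reference [42] sketched above, per-feature weights follow directly from the fixation percentages; a minimal sketch (function name and dict-based interface are illustrative):

```python
def weights_from_fixation_counts(hits, n_fixations):
    """hits: dict mapping feature name -> number of fixations landing in
    that feature's 'most important' regions. Weights are proportional to
    the percentage of fixations captured by each feature, normalized to
    sum to 1."""
    pct = {f: h / n_fixations for f, h in hits.items()}
    total = sum(pct.values())
    return {f: p / total for f, p in pct.items()}
```

With 10% of fixations landing in color's most important regions and 5% in size's, color receives twice the weight of size, matching the worked example above.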
In reference [37], a saliency model learned from raw image patches nonlinearly maps a patch to a real number indicating fixated or nonfixated locations. As used in reference [37], a patch refers to a square region of an image. This is in contrast to the term “region”, which generally can be of any shape. Determining size and resolution of the patches is generally not straightforward and a tradeoff is made between computational tractability and generality. In addition, a high dimensional feature vector of an image patch requires a large number of training samples. Both references [36] and [37] apply support vector machines to learn saliency models. It should be noted that “raw data” refers to image data to which no feature extraction step has been applied. In other words, raw data can refer to the image data free of any assumptions, since extraction of certain features from raw images introduces and implies assumptions made about which features would capture relevant information (e.g., information pertaining to salience of particular locations in an image). However, learning saliency models from raw data does not generally involve a feature extraction step and thus can be computationally costly due to high dimensional spaces generally associated with an image since even one pixel in the image has multiple feature dimensions. Consequently, many saliency models involve a feature extraction step first performed on a raw image to generate one or more maps, and the saliency model is learned based on these one or more maps (as opposed to directly based on the raw images). Feature extraction is essentially a dimension reduction process. In reference [36], a large eye tracking database was constructed and a saliency model based on low-, mid-, and high-level image features was learned. Classification of features into one of low-, mid-, and high-level can be defined differently by different researchers. 
As previously mentioned, low-level features can refer to color, intensity, and orientation while high-level features can refer to objects (such as faces, pedestrians, and text) and events (such as people talking). An example of a mid-level feature, given in reference [36], can be a horizon detector, which assumes most objects rest on the surface of the Earth, trained from a gist feature that captures context information and holistic characteristics in an image. The gist feature describes a vague contextual impression. In many embodiments of the present disclosure, a data-driven approach using human fixation data has been used to study feature integration methods. A first model retains basic structure of a standard saliency model by using a linear integration method that considers bottom-up feature channels. This first model is compatible with literature on psychophysics and physiology, which provide support for a linear integration of different features in a visual scene, and is generally easier to implement than nonlinear integration methods. In a second model, an AdaBoost (adaptive boosting) based model is used that combines an ensemble of sequentially learned models to model complex input data. In this second model, the ensemble approach allows each weak classifier, also referred to as a base model, to cover a different aspect of the data set (see reference [35], incorporated herein by reference in its entirety). In addition, unlike reference [37], which uses raw image data, and reference [36], which includes a set of ad-hoc features, the features of the present disclosure are bottom-up biologically plausible ones (see references [1] and [6]). “Ad-hoc” features generally refer to any features implemented into a model based on empirical experience of a researcher concerning a particular application associated with the model. 
With respect to saliency models, ad-hoc features implemented into the saliency models are based on empirical experience of a researcher and generally do not take into consideration biological processes related to vision of humans and other animals. “Biologically plausible” features refer to those designed according to biological systems, essentially mimicking visual processes (as well as physical and mental processes related to visual processes) of humans and other animals. Saliency models built around biologically plausible features work under an assumption that since humans and many other animals generally have higher capacity for saliency detection than machines, mimicking biological processes of humans and other animals can lead to accurate, broadly applicable saliency models. FIG. 1 shows a computational architecture of an Itti-Koch model (see reference [6]), which is a computational saliency model that comprises three low-level features (color, intensity, and orientation), with an additional face feature channel. Specifically, the original Itti-Koch model (without the additional face feature channel) includes two color channels (blue/yellow and red/green), one intensity channel, and four orientation channels (0°, 45°, 90°, 135°). Raw feature maps, also referred to as raw maps, of different spatial scales are created for each feature channel using successive low-pass filtering followed by downsampling ( 105 ). In FIG. 1 , each dyadic Gaussian pyramid ( 105 ) performs the successive low-pass filtering and downsampling of an input image ( 100 ). Raw feature maps (for example, 107 , 108 ) of nine spatial scales (0-8) are created for each feature by applying dyadic Gaussian pyramids ( 105 ) to raw data. A spatial scale of 0 yields an output image of the same resolution (i.e., (½)⁰ times the resolution of the input image) as that of the input image ( 100 ). 
A spatial scale of 3 yields an output image of resolution (½)³ times the resolution of the input image ( 100 ). Consequently, a lower spatial scale is finer than a higher spatial scale. A scale 1 map can be obtained by applying a dyadic Gaussian pyramid ( 105 ) to a scale 0 map, a scale 2 map can be obtained by applying a dyadic Gaussian pyramid ( 105 ) to a scale 1 map, and so forth. Although the number of spatial scales used can be arbitrarily large, nine spatial scales are generally sufficient to capture information at different resolutions for many input images. Center-surround difference maps are then constructed ( 110 ) as point-wise differences across spatial scales, or equivalently point-wise differences between fine and coarse scales of the input image ( 100 ) as obtained from applying the dyadic Gaussian pyramids ( 105 ) to the input image ( 100 ), to capture local contrasts. Specifically, six center-surround difference maps (for example, 112 , 113 ) are then constructed ( 110 ) as point-wise differences across spatial scales using a center level c={2, 3, 4} and surround level s=c+δ, where δ={2,3} for each feature channel. Consequently, with two color channels, one intensity channel, and four orientation channels, there are 42 center-surround difference maps. Each set of six center-surround difference maps is given by 2-4, 2-5, 3-5, 3-6, 4-6, and 4-7, where, for example, 2-4 denotes the point-wise difference across spatial scales 2 and 4. It should be noted that the values for the center level and the surround level, which lead to the number of center-surround difference maps, are based on an original implementation of the Itti-Koch model. Other values for the center and surround level can be used. A single conspicuity map (for example, 117 , 118 ) is obtained for each of the feature channels through utilization of within-feature competition ( 115 ) of one or more sets of six center-surround difference maps, and is represented at spatial scale 4. 
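The nine-scale pyramid and the center-surround scale pairs described above can be sketched as follows; the 2x2 box blur is a simple stand-in for the dyadic Gaussian pyramid's low-pass filter, not the original implementation:

```python
import numpy as np

def dyadic_pyramid(image, n_scales=9):
    """Raw feature maps at scales 0..n_scales-1: each scale low-pass
    filters (a 2x2 box average here, standing in for a Gaussian) and
    downsamples the previous scale by a factor of 2."""
    maps = [np.asarray(image, dtype=float)]
    for _ in range(n_scales - 1):
        m = maps[-1]
        h, w = (m.shape[0] // 2) * 2, (m.shape[1] // 2) * 2
        m = m[:h, :w]
        maps.append(0.25 * (m[0::2, 0::2] + m[1::2, 0::2] +
                            m[0::2, 1::2] + m[1::2, 1::2]))
    return maps

def center_surround_pairs(centers=(2, 3, 4), deltas=(2, 3)):
    """The six (center, surround) scale pairs: c in {2,3,4},
    s = c + delta with delta in {2,3}."""
    return [(c, c + d) for c in centers for d in deltas]
```

With seven channels (two color, one intensity, four orientation), six pairs per channel gives the 42 center-surround difference maps stated above.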
A spatial scale of 4 is used since it is between the available spatial scales of 0 and 8. Specifically, in the implementation shown in FIG. 1 , the within-feature competition ( 115 ) is performed using across-scale addition. Across-scale addition refers to addition of the six center-surround difference maps as provided by sum[(2-4), (2-5), (3-5), (3-6), (4-6), (4-7)] to obtain an intermediate map. Weight applied to each of the six center-surround difference maps is generally, but not necessarily, equal. As previously mentioned, each conspicuity map corresponds with a particular feature channel utilized in the Itti-Koch model. To obtain a conspicuity map corresponding with a feature channel, the intermediate maps obtained through across-scale addition are summed together. For instance, since the original Itti-Koch model includes one intensity channel, the intermediate map, obtained from adding one set of six center-surround difference maps, is the conspicuity map for the intensity feature channel. On the other hand, since the original Itti-Koch model includes four orientation channels, the four intermediate maps, each intermediate map obtained from adding one set of six center-surround difference maps associated with one orientation channel, are added together to form the conspicuity map for the orientation feature channel. Weight applied to each intermediate map is also generally, but not necessarily, equal. Each of the feature maps presented above (e.g., raw feature maps, center-surround maps, and conspicuity maps) can be generated using one or more map generating modules. Each map generating module can be implemented to generate one type of feature map or multiple types of feature maps using the processes provided above. Samples are used to learn optimal weights for the feature channels. 
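The across-scale addition and within-feature summation described above can be sketched as follows, assuming each channel's six center-surround maps have already been resampled to the common scale-4 resolution and that equal weights are used throughout:

```python
import numpy as np

def conspicuity_map(cs_map_sets):
    """Within-feature competition via across-scale addition (a sketch):
    each inner list holds one channel's six center-surround maps at a
    common resolution; each is summed into an intermediate map, and the
    intermediate maps of all channels belonging to one feature (e.g., the
    four orientation channels) are summed into the conspicuity map."""
    intermediates = [sum(maps) for maps in cs_map_sets]
    return sum(intermediates)
```

For the intensity feature there is a single channel, so its conspicuity map is just the one intermediate map; for orientation, four intermediate maps are summed.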
In a first embodiment of the present disclosure, each image location x ( 130 ) in an image serves as a sample that is utilized in learning optimal weights for the feature channels. Each sample comprises a feature vector of a particular location x ( 130 ) and a value of that particular location x ( 130 ) in a fixation map (taken to be ground truth). Relevant maps, which refer to maps utilized in evaluating salience, include all conspicuity maps. It should be noted that a plurality of input images ( 100 ) and/or a plurality of subjects can be used to obtain samples. Specifically, an image location x ( 130 ) refers to a location within any input image in the plurality of input images ( 100 ). In a second embodiment of the present disclosure, a sample generation process involves, for each image location x ( 130 ) in the input image ( 100 ), extracting values (shown as rightward pointing arrows in FIG. 1 ) of relevant maps at the particular image location ( 130 ) and stacking the values to obtain the sample vector f(x). Relevant maps can take into account one or more of feature maps across different spatial scales (for example, 107 , 108 ), center-surround difference maps (for example, 112 , 113 ), and/or conspicuity maps (for example, 117 , 118 ). A final saliency map ( 125 ) is obtained by applying the optimal weights to some or all of the relevant maps and performing integration ( 120 ) of the weighted relevant maps. For the Itti-Koch saliency model, linear integration is utilized. The optimal weights obtained can generally be used to define a saliency model used for prediction of fixation and nonfixation locations of different visual scenes. Similar to the first embodiment, it should be noted that a plurality of input images ( 100 ) and/or a plurality of subjects can be used to obtain the sample vector f(x). Specifically, an image location x ( 130 ) refers to a location within any input image in the plurality of input images ( 100 ). 
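The sample-generation process of the second embodiment (extracting the values of all relevant maps at each image location x and stacking them into the sample vector f(x), paired with the fixation-map ground truth) can be sketched as:

```python
import numpy as np

def sample_vectors(relevant_maps, fixation_map):
    """For every image location x, stack the values of the relevant maps
    at x into a feature vector f(x); pair each vector with the ground-
    truth value at x in the fixation map. All maps are assumed to share
    one resolution."""
    stack = np.stack([np.asarray(m).ravel() for m in relevant_maps], axis=1)
    return stack, np.asarray(fixation_map).ravel()
```

Each row of the returned matrix is one sample vector f(x); the relevant maps may be conspicuity maps only (first embodiment) or any mix of raw, center-surround, and conspicuity maps (second embodiment).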
According to many embodiments of the present disclosure, a saliency model is built upon the Itti-Koch saliency model, where an additional face channel (a high-level feature) is added to the Itti-Koch saliency model (see references [6] and [9]), as shown in FIG. 1 . The face channel is an independent feature channel of a computational saliency model that performs face detection over an input visual scene. The conspicuity map of the face channel can be generated by a face detector, such as the face detector provided in reference [46], incorporated herein by reference in its entirety. The saliency model uses an optimized set of weights learned from human fixation data for weighting some or all of the relevant maps, possibly including weighting of the conspicuity map of the face channel, of a particular saliency model prior to integration ( 120 ). It should be noted that values in a conspicuity map are normalized to fall within [0,1] and weights applied to conspicuity maps are normalized to sum to 1. Consequently, saliency values in the final saliency map are also in the range of [0,1]. With continued reference to FIG. 1 , according to an embodiment of the present disclosure, a saliency model is learned directly from the output of the spatially organized feature maps from the input images and from fixation maps constructed from the training data. “Spatially organized” refers to each conspicuity map having a spatial layout and locations corresponding to those in the input image ( 100 ). The learning of the saliency model does not involve assumptions or explicit modeling of data patterns such as modeling inherent structures or analyzing characteristics in the input image ( 100 ). 
In addition, unlike methods that do not include any subject-specific information, the learning of the saliency model of the embodiments of the present disclosure can be applied to training one saliency model for each subject, thereby providing a principled framework to assess inter-subject variability by taking into account each subject's bias toward certain visual features. In the first embodiment of the present disclosure, a final saliency map is generated through linear integration of each conspicuity map, where each conspicuity map is a relevant map. Specifically, the conspicuity maps are used to construct feature vectors from fixation data to learn the optimal weights used in performing the linear integration. By using conspicuity maps for each feature, the linear integration involves mainly relevant information for a particular feature channel, and the estimated feature weights of a particular feature channel are not affected by other feature channels. To quantify the relevance of different features in deciding where a human eye would fixate, linear least-squares regression with constraints, calculated from the human fixation data, can be used to learn the feature weights of the different features. Weights applied to the conspicuity maps (or, equivalently, weights applied to the feature channels) can be normalized so as to sum to 1. Additionally, values in a conspicuity map are normalized to fall within [0,1]. Consequently, since the weights are normalized to sum to 1 and the conspicuity maps are normalized to fall within [0,1], the final saliency map is also in the range [0,1]. A higher weight for a feature channel signifies that the particular feature channel is more relevant in predicting fixation locations and nonfixation locations. For example, if w_F > w_C, the face feature channel is more important than the color feature channel in predicting where people are likely to look (fixation locations and nonfixation locations).
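The linear integration with normalized weights can be sketched as follows; this is an illustrative sketch, and the weights shown are hypothetical rather than values learned from fixation data.

```python
import numpy as np

def integrate(conspicuity_maps, weights):
    """Final saliency map as a weighted linear combination of normalized
    conspicuity maps; with weights summing to 1 and maps in [0, 1],
    the result is also in [0, 1]."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize weights to sum to 1
    maps = [m / m.max() if m.max() > 0 else m for m in conspicuity_maps]
    return sum(wi * m for wi, m in zip(w, maps))

# Hypothetical weights favoring the face channel (w_F > w_C)
C, I, O, F = [np.random.rand(4, 4) for _ in range(4)]
S = integrate([C, I, O, F], weights=[0.1, 0.1, 0.2, 0.6])
# S is a 4x4 saliency map with values in [0, 1]
```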
Consider that there are two regions (A and B) in an image, where A is salient in color (e.g., possesses traits that lead to high saliency values obtained by the color channel) and B is salient in face information, and where the saliency values in their respective conspicuity maps (i.e., A in the color conspicuity map and B in the face conspicuity map) are the same. Since the weight of the face channel is larger, the saliency model predicts that people will look at the region with faces before looking at the region that is salient in color. As a sidenote, a region that is salient in color can possess, for instance, bright colors. Since intensity can be, and usually is, calculated as an average of R (red), G (green), and B (blue) values, regions salient in color may also be salient in intensity. As an example of the first embodiment (and as illustrated in FIG. 1 and FIG. 4), color, intensity, orientation, and face conspicuity maps are used to construct feature vectors for learning of the saliency model. Let V = [C I O F] denote a matrix comprising stacked feature vectors constructed from the color C, intensity I, orientation O, and face F conspicuity maps; M_fix denote a vectorized fixation map of the same spatial resolution as the conspicuity maps; and w = [w_C w_I w_O w_F]^T denote the weights of the four feature channels. The vectorized fixation map M_fix is a one-dimensional map constructed by concatenating each row (or column) of an initial two-dimensional fixation map (generated using fixation data) to form a vector. The initial two-dimensional fixation map is represented as the recorded fixations, sampled to be of equal resolution with the conspicuity maps (e.g., generally spatial scale 4 for the Itti-Koch model), convolved with an isotropic Gaussian kernel, where an assumption is made that each fixation gives rise to Gaussian distributed activity.
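The construction of the fixation map can be sketched as follows. This is an illustrative sketch: the kernel width `sigma` is a hypothetical choice, and summing one Gaussian per recorded fixation is equivalent to convolving an impulse map (one impulse per fixation) with an isotropic Gaussian kernel.

```python
import numpy as np

def fixation_map(fixations, shape, sigma=2.0):
    """Sum an isotropic Gaussian centered at each recorded fixation,
    then normalize to [0, 1]."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    m = np.zeros(shape)
    for (r, c) in fixations:
        m += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return m / m.max() if len(fixations) else m

M = fixation_map([(3, 4), (10, 12)], shape=(16, 16))  # two recorded fixations
M_fix = M.reshape(-1)   # vectorized (one-dimensional) fixation map
```

Locations near a fixation receive smoothly decaying values, which is the smoothing effect described above.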
By convolving with an isotropic Gaussian kernel, the correspondence between an exact fixation location in the image and the corresponding image features is smoothed. It should be noted that a fixation location is an exact location in the image. Since it is a location at which subjects fixate, saliency values at fixation locations are usually high, and saliency values at locations around the exact location (unless these locations are also fixation locations) are usually lower than those at the exact location. By smoothing the correspondence between an exact fixation location and the corresponding image features, locations around the exact fixation location can also exhibit high saliency values, which is in accordance with the functionality of the eyes of humans and other animals. Additionally, by smoothing the correspondence, effects of noise or deviations can be counteracted, since locations around the exact location will have higher saliency scores (relative to the case of no smoothing) and will be less affected by noise and deviations. These deviations refer to a difference between an expected fixation location and an actual fixation location. For instance, consider that a particular region at location (x,y) has a maximum saliency value, but a subject looked at a nearby position (x+m,y+n). The m and n are small amounts referred to as deviations. An objective function (which refers to a function for which a maximum or minimum is sought subject to given constraints) for minimization over the feature weights is provided by:

argmin_w |V×w − M_fix|^2, subject to w ≥ 0,

where V×w is a saliency value, which is a weighted average of the conspicuity maps represented by V with the weights represented by w, and M_fix is the vectorized fixation map obtained from human fixation data. The objective function defines a formal equation for finding a set of weights w that minimizes |V×w − M_fix|^2.
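The constrained minimization can be sketched with projected gradient descent; this is a simple stand-in solver for illustration (the disclosure's solver of choice is an active set method), and all names and the toy data are hypothetical.

```python
import numpy as np

def learn_weights(V, M_fix, iters=5000, lr=1e-3):
    """Minimize |V w - M_fix|^2 subject to w >= 0 by projected gradient
    descent, then normalize the weights to sum to 1."""
    w = np.full(V.shape[1], 1.0 / V.shape[1])     # start inside the feasible region
    for _ in range(iters):
        grad = 2.0 * V.T @ (V @ w - M_fix)        # gradient of the squared error
        w = np.maximum(w - lr * grad, 0.0)        # project back onto w >= 0
    s = w.sum()
    return w / s if s > 0 else w

# Toy check: fixation data generated by known weights [0.7, 0.3]
rng = np.random.default_rng(0)
V = rng.random((400, 2))                          # 400 locations, 2 conspicuity maps
w = learn_weights(V, V @ np.array([0.7, 0.3]))
# w recovers approximately [0.7, 0.3]
```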
It should be noted that if one or more input images have X×Y locations total and spatial scale 4 is utilized, the resulting conspicuity maps would have (X/2^4)×(Y/2^4) locations, while the two-dimensional fixation map would be sampled to a resolution with (X/2^4)×(Y/2^4) locations. The matrix V holds stacked feature vectors from each of the four conspicuity maps (color, intensity, orientation, and face) and thus is a [(X/2^4)(Y/2^4)]×4 matrix, the vector w is a 4×1 vector, and V×w and M_fix are [(X/2^4)(Y/2^4)]×1 vectors. Consequently, V×w − M_fix is a vector, where each element is a difference between a saliency value at a particular location in the conspicuity map and a value derived at the particular location from the human fixation map. The saliency value is considered a predicted value, whereas the value derived from the fixation map is called “human performance” or “ground truth”. In the objective function, the constraint (w ≥ 0 in this case) defines a feasible region. Specifically, weights applied to each of the conspicuity maps (or equivalently, weights applied to each feature channel) are non-negative. The objective function can be solved using an active set method (see reference [32], incorporated herein by reference in its entirety). The active set method finds a feasible starting point within the feasible region and then iteratively updates until finding a desired solution. The objective function can also be solved by a greedy search method along the feasible region, where every possible value in the feasible region is evaluated. Other methods of solving the objective function can be performed with techniques and procedures identifiable by a skilled person. FIGS. 2A-2C provide illustrations of how feature weights affect final saliency maps. FIG. 2A shows an input image with lines indicating saccadic eye movement across the input image and black circles indicating fixation locations in the input image. FIGS.
2B and 2C show saliency maps constructed using a linear combination with equal weights and optimal weights, respectively, for each of the feature channels. It should be noted that in the saliency map of FIG. 2C, higher weights were obtained for the face channel, and thus the fixation locations relating to faces in the input image of FIG. 2A are clearly shown as bright areas (i.e., areas with a high saliency score) in FIG. 2C. In the second embodiment of the present disclosure, to remove the assumption of linear integration of the feature channels, an AdaBoost based method is used for learning saliency models. The AdaBoost method is a special case of boosting, where boosting refers to a machine learning technique for supervised learning. Supervised learning refers to inferring a function, also referred to as a classifier or a model, from training data. The AdaBoost method can be utilized for constructing a saliency model involving nonlinear integration of conspicuity maps (see references [17], [18], [19], and [20], incorporated herein by reference in their entireties). In applying the AdaBoost method, the number of boosting rounds can be tuned. With reference back to FIG. 1, in the second embodiment of the present disclosure, the relevant maps include intermediate maps (raw maps (for example, 107, 108) and center-surround difference maps (for example, 112, 113)) as well as the conspicuity maps (for example, 117). Each of the relevant maps is used to form the sample vector f(x) at each location x. A final saliency map (125) is obtained by a product between a set of learned weights and a set of learned weak classifiers. Integration (120) involves feature selection, thresholding, weight assignment, and feature combination, each of which will be discussed later in the disclosure. In the second embodiment, the sample vector is given by

f(x) = [f_1(x) f_2(x) … f_nf(x) f_nf+1(x) … f_nf+ncs(x) f_nf+ncs+1(x) … f_nf+ncs+nc(x)],

where subscript “nf” denotes the number of raw feature maps, “ncs” denotes the number of center-surround maps, and “nc” denotes the number of conspicuity maps. Each element f_n(x) in the sample vector is a scalar quantity, where subscript n denotes a number assigned to a map. Although the numbering can be arbitrary, an example of the construction of the sample vector is given as follows. In FIG. 1, f_1(x) corresponds with a value in a first raw feature map (107), f_2(x) corresponds with a value in a second raw feature map (108), f_nf(x) corresponds with a value in a last raw feature map (109), f_nf+1(x) corresponds with a value in a first center-surround map (112), f_nf+ncs+1(x) corresponds with a value in a first conspicuity map (117), and f_nf+ncs+nc(x) corresponds with a value in a last conspicuity map (118). As a particular example, with further reference to the Itti-Koch model, where three types of feature channels (color, intensity, and orientation) are utilized, the stacked vector f(x) at each location x contains information from 87 maps (nf = 42 raw feature maps, ncs = 42 center-surround maps, and nc = 3 conspicuity maps). In the Itti-Koch model with an additional face feature channel, each location x contains information from 88 maps (42 raw feature maps, 42 center-surround maps, and 4 conspicuity maps). The functions to be inferred are the weak classifiers g_t and their weights α_t. A weighted combination of iteratively learned weak classifiers (also referred to as base classifiers or base models), denoted as g_t(f), is used to infer a strong classifier, denoted as G(f). For subject-specific saliency models, only fixation data from one subject is used to infer the weak classifiers and the weights associated with the weak classifiers. These weak classifiers and weights are then used to define the strong classifier.
For general saliency models (constructed to be applicable to a general population), fixation data from all subjects are used to infer the weak and strong classifiers as well as the weights. A weak classifier refers to a classifier that is slightly better than a random guess and is usually of simple form, such as a linear classifier. A “guess” refers to an initial classification of each of the samples. A “random guess” arbitrarily assigns each of the samples to one of the two classes (i.e., 50% nonfixation and 50% fixation). A strong classifier obtained through learning of the weak classifiers and the weights corresponding to each of the weak classifiers generally performs better in predicting whether a particular sample corresponds to a nonfixation location or a fixation location. For learning saliency models, the AdaBoost method generally involves two classes: a class for nonfixation locations (generally denoted as −1) and a class for fixation locations (generally denoted as 1). Through each iteration of the AdaBoost method, the weak classifiers are adjusted in favor of the misclassified samples, which refer to sample locations in the image incorrectly classified as fixation or nonfixation locations (using the fixation data as ground truth). Specifically, after each weak classifier is trained, all data samples are reweighted such that samples misclassified by previously trained weak classifiers are weighted more. Subsequently trained weak classifiers will place more emphasis on these previously misclassified samples. A method for learning a nonlinear feature integration using AdaBoost is provided as follows. The method comprises a training stage and a testing stage. In the training stage, the method takes as input a training dataset with N images and human eye tracking data from M subjects. In the testing stage, the method takes as input a testing image Im.
In the second embodiment, the AdaBoost method is used to learn a nonlinear integration of relevant maps by generating a one-dimensional real number from a feature vector in the d-dimensional space, G(f): R^d → R, where d is the dimension of the feature vector f(x) (or, equivalently, the number of relevant maps being added together). Formally,

G(f) = Σ_{t=1…T} α_t g_t(f),

where g_t(·) denotes a weak classifier with g_t(f) ∈ {−1, 1}, G(·) is a final saliency model with G(f) ∈ [0, +∞), α_t is a weight applied to g_t(·), and T is the total number of weak classifiers considered in the learning process. The total number of weak classifiers, denoted as T, is pre-defined depending on the number of features (e.g., relevant maps) desired to be selected from the feature pool. It should be noted that each image location x corresponds to a feature vector f(x). As previously mentioned, the weak classifier g_t(f) can be −1 or 1, where −1 predicts that location x is a nonfixation location while 1 predicts that location x is a fixation location. As an example, in the Itti-Koch model with an additional face feature channel, the sample vector f(x) is an 88×1 vector. The weak classifier g_t(f) takes the sample vector and maps it to a value of −1 or 1. Information from each of the channels (including but not limited to color, intensity, orientation, and face) contained in the feature vector is used to obtain g_t and α_t for each of the samples from one or more images. Instead of taking the polarity of the AdaBoost output, as conventionally used to solve classification problems, the present embodiment utilizes the real values of the strong classifier G(f) to form a saliency map. When predicting where people will and will not look, a particular location can be classified as a fixation location or a nonfixation location. In a binary situation, a location can be a fixation location (e.g., ‘1’) or a nonfixation location (e.g., ‘−1’).
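The real-valued strong-classifier output can be sketched with decision-stump weak classifiers. This is an illustrative sketch: the stumps and weights below are hypothetical values chosen by hand, not parameters learned from fixation data.

```python
import numpy as np

def stump(F, dim, thresh, direction):
    """Weak classifier g_t: outputs +1 or -1 by thresholding one
    feature dimension (one relevant map) in the given direction."""
    return np.where(direction * (F[:, dim] - thresh) > 0, 1.0, -1.0)

def strong_classifier(F, stumps, alphas):
    """Real-valued AdaBoost output G(f) = sum_t alpha_t g_t(f);
    the real value (not just its sign) serves as the saliency score."""
    G = np.zeros(F.shape[0])
    for (dim, thresh, direction), a in zip(stumps, alphas):
        G += a * stump(F, dim, thresh, direction)
    return G

F = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.7]])   # 3 locations, 2 maps
stumps = [(0, 0.5, +1), (1, 0.5, +1)]                 # hypothetical weak classifiers
alphas = [0.8, 0.4]
saliency = strong_classifier(F, stumps, alphas)
# saliency == [0.4, -0.4, 1.2]
```

Locations where many high-weight weak classifiers agree on fixation receive larger real values, giving an ordering of predicted fixations rather than a binary label.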
By using real numbers, additional information about each fixation location and each nonfixation location can be provided. For example, consider that a prediction of the first ten fixation locations is desired. With a saliency map of real numbers instead of just 1 and −1 (or any other set of binary values), a prediction of which locations will be fixated, and in what order each of these predicted fixated locations will be fixated, can be achieved. In the training stage, from eye tracking data of M subjects viewing N images, the following is performed: Step 1. From all locations in the N images, a subset of S locations, referred to as samples and denoted as x_s, is selected, where s = {1, 2, …, S}. Each sample has a corresponding label y_s, where y_s = 1 for a positive sample (i.e., a sample from a fixation location) and y_s = −1 for a negative sample (i.e., a sample from a nonfixation location). The labels are provided by the human fixation data. The human fixation data provides the ground truth, which is considered the correct classification of a sample as a fixation or nonfixation location, and is used to train the weak classifiers. Both positive and negative samples are selected from the human fixation map. For positive samples, the locations sampled depend on the values of the fixation maps. For example, if the value of a location A is twice that of a location B in the fixation map, the probability of A being sampled is twice the probability of B being sampled. All locations of 0 value have zero probability of being sampled as a positive sample. For negative samples, uniform sampling of all locations with a 0 value in the human fixation map is performed. The number of positive samples and negative samples is specified by a person building the saliency model. The number of positive samples and negative samples is generally chosen to be the same and is generally in the thousands over a set of input images (and in the tens for each input image).
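The sampling of Step 1 can be sketched as follows; this is an illustrative sketch with a hypothetical toy fixation map.

```python
import numpy as np

def sample_locations(fix_map, n_pos, n_neg, rng):
    """Positive samples: drawn with probability proportional to the
    fixation-map value (zero-valued locations have zero probability).
    Negative samples: drawn uniformly from zero-valued locations."""
    flat = fix_map.reshape(-1)
    pos = rng.choice(flat.size, size=n_pos, p=flat / flat.sum())
    zeros = np.flatnonzero(flat == 0)
    neg = rng.choice(zeros, size=n_neg)
    return pos, neg

rng = np.random.default_rng(1)
fm = np.zeros((8, 8))
fm[2, 2] = 2.0          # sampled twice as often as ...
fm[5, 5] = 1.0          # ... this location
pos, neg = sample_locations(fm, n_pos=30, n_neg=30, rng=rng)
# every positive index has a nonzero fixation value; every negative has 0
```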
From the sampling, a feature vector at a location x_s is computed and denoted as f(x_s) (or simply f_s). A sampling module can be utilized to select these positive and negative samples. Step 2. Initialize the weight of each sample to be w_1(s) = 1/S. The weights applied to each sample will be adjusted during Step 3 of the training stage. Step 3. For t = 1, …, T, where T denotes the number of weak classifiers to be trained (generally pre-defined), each of Steps 3.a), 3.b), and 3.c) is performed. In each iteration, one weak classifier g_t and its corresponding weight α_t are learned. Step 3.a) Train a weak classifier g_t: R^d → {−1, 1} which minimizes the weighted error function

ε_u = Σ_{s=1…S} w_t(s)·[y_s ≠ g_u(f_s)],

where ε_u refers to the classification error sought to be minimized, G refers to the set of all feature dimensions, and [y_s ≠ g_u(f_s)] is an indicator function that is equal to 0 if y_s = g_u(f_s) and equal to 1 if y_s ≠ g_u(f_s). Specifically, for a sample s, if its label y_s is different from the output of the weak classifier g_u(f_s) corresponding to sample s, then [y_s ≠ g_u(f_s)] is 1 and the classification error ε_u increases by w_t(s). For every feature dimension (g_u ∈ G), a classification error is computed at all possible pre-defined thresholds. A “best” relevant map, also referred to more simply as a “best” feature, is the relevant map associated with the minimum classification error in a given iteration of Step 3. When evaluating each relevant map, the error for the relevant map is obtained across all possible pre-defined thresholds. A threshold refers to a value that serves as a cutoff from which a sample is predicted to be nonfixated or fixated. Specifically, values in a sample can be normalized to values between 0 and 1. The threshold candidates are a plurality of values (e.g., any value between 0 and 1 in increments of 0.1 or 0.01). For each threshold value, two directions are searched. As an example, consider a threshold m.
With a threshold of m and for a particular feature dimension, if a sample with a value x satisfies x > m, then the weak classifier outputs g_t = 1. Otherwise, the weak classifier outputs g_t = −1. In the other direction for the same threshold value and feature dimension, if a sample with a value x satisfies x < m, then the weak classifier outputs g_t = 1; otherwise, the weak classifier outputs g_t = −1.

CLAIMS

1. A method for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the method comprising:
providing visual fixation data from one or more subjects, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location;
constructing one or more visual fixation maps based on the visual fixation data;
providing a plurality of feature channels, wherein each feature channel in the plurality of feature channels detects a particular feature in the one or more images;
constructing a plurality of conspicuity maps, wherein each conspicuity map in the plurality of conspicuity maps is associated with one feature channel in the plurality of feature channels, and wherein a particular conspicuity map is constructed based on features detected by a feature channel associated with the particular conspicuity map; and
generating a plurality of weights, wherein the plurality of weights is a function of the one or more visual fixation maps and the plurality of conspicuity maps, thus generating the saliency model,
wherein each weight in the plurality of weights is associated with one conspicuity map in the plurality of conspicuity maps.

2. The method of claim 1, wherein the generating a plurality of weights comprises:
applying each weight in the plurality of weights to a conspicuity map associated with said each weight in the plurality of weights to obtain a plurality of weighted conspicuity maps,
computing a difference between the weighted conspicuity maps and the one or more visual fixation maps; and
adjusting each weight in the plurality of weights based on the difference, iteratively performing the applying, the computing, and the adjusting until the difference between the weighted conspicuity maps and the one or more visual fixation maps is minimum, thus generating the plurality of weights.

3. The method of claim 1, wherein the generating a plurality of weights is performed using an active set method.

4. The method of claim 1, wherein each weight is nonnegative.

5. The method of claim 1, wherein the constructing one or more visual fixation maps comprises:
generating one or more initial fixation maps from the visual fixation data;
convolving each of the one or more initial fixation maps with an isotropic Gaussian kernel to form the one or more visual fixation maps, wherein information from the one or more visual fixation maps is contained within a vectorized fixation map Mfix.

6. The method of claim 1, wherein the constructing one or more visual fixation maps comprises:
generating one or more initial fixation maps from the visual fixation data;
convolving each of the one or more initial fixation maps with an isotropic Gaussian kernel to form the one or more visual fixation maps, wherein information from the one or more visual fixation maps is contained within a vectorized fixation map Mfix;
and wherein the constructing a plurality of conspicuity maps comprises:
generating, for each conspicuity map, a feature vector based on features detected by a feature channel associated with the conspicuity map in the one or more images; and
constructing a conspicuity matrix V, wherein each column or row of the conspicuity matrix is a feature vector, and
wherein the generating a plurality of weights comprises generating a vector of weights w by solving a linear regression equation argmin_w |V×w − Mfix|^2.

7. The method of claim 1, wherein the constructing a plurality of conspicuity maps comprises, for at least one of the feature channels:
down-sampling the one or more images to obtain raw feature maps, wherein the raw feature maps comprise maps associated with a particular feature channel at different spatial resolutions;
calculating differences between two maps associated with the particular feature channel at different spatial resolutions to generate a plurality of center-surround difference maps; and
performing across-scale addition by summing the plurality of center-surround difference maps to obtain a conspicuity map associated with the particular feature channel.

8. The method of claim 1, wherein each feature channel is selected from the group consisting of a color feature channel, an intensity feature channel, an orientation feature channel, and a face feature channel.

9. A method for generating a saliency map for an input image, comprising:
generating a saliency model using the method of claim 1; and
applying the saliency model at each location in the input image to generate the saliency map,
wherein the saliency map provides a prediction of whether a particular location in the input image, when viewed by one or more subjects, will be a fixation location or a nonfixation location.

10. A method for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the method comprising:
providing visual fixation data from one or more subjects, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location;
selecting a first plurality of locations in the one or more images, wherein each location in the first plurality of locations is a fixation location;
selecting a second plurality of locations in the one or more images, wherein each location in the second plurality of locations is a nonfixation location;
setting sample weights applied to each location in the first and second plurality of locations to a common value;
providing a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images;
generating a plurality of relevant maps for each feature channel based on features detected by each feature channel, wherein a feature value at each location in a particular relevant map is based on features detected by a feature channel associated with the particular relevant map;
selecting a relevant map from among the plurality of relevant maps that minimizes weighted classification errors for the first and second plurality of locations, wherein a classification error occurs when a relevant map indicates that a particular location in the first plurality of locations is a nonfixation location or when a relevant map indicates that a particular location in the second plurality of locations is a fixation location, and wherein each classification error is multiplied with the sample weight associated with the particular location to form a weighted classification error;
generating a thresholded relevant map for the selected relevant map, wherein a value associated with a particular location in the thresholded relevant map is either a first value or a second value, and wherein the value associated with the particular location is based on the feature value at the particular location in the selected relevant map;
calculating a relevance weight applied to the selected relevant map based on the classification errors associated with the selected relevant map;
adjusting the sample weights applied to each location in the first and second plurality of locations based on the relevance weight and the classification errors associated with the selected relevant map;
iteratively performing the relevant map selecting, the relevance weight calculating, and the sample weight adjusting for a pre-defined number of times; and
generating the saliency model based on a sum of a product between each relevance weight and the thresholded relevant map corresponding to the selected relevant map corresponding to the relevance weight.

11. The method of claim 10, wherein the generating a plurality of relevant maps comprises:
down-sampling the one or more input images to obtain raw feature maps, wherein the raw feature maps comprise maps associated with a particular feature channel at different spatial resolutions and maps associated with different feature channels;
calculating differences between two maps associated with the particular feature channel at different spatial resolutions to generate a plurality of center-surround difference maps; and
performing across-scale addition by summing the plurality of center-surround difference maps to obtain a plurality of conspicuity maps,
wherein the relevant maps comprise the raw feature maps, the plurality of center-surround difference maps, and the plurality of conspicuity maps.

12. The method of claim 10, wherein the selecting a relevant map comprises, for each relevant map:
providing a set of threshold values;
classifying each location in a particular relevant map as a fixation location or a nonfixation location based on a comparison between the feature value at each location in the particular relevant map and each threshold value in the set of threshold values;
calculating the classification errors of the particular relevant map based on a threshold value from among the set of threshold values; and
selecting the relevant map and the threshold value that minimizes the classification errors.

13. The method of claim 12, wherein the generating a thresholded relevant map comprises, for each location in the thresholded relevant map: setting a value associated with a particular location based on comparing a feature value at the particular location in the selected relevant map and the selected threshold value.

14. The method of claim 10, wherein the adjusting the sample weights comprises increasing weight of samples associated with a classification error and/or decreasing weight of samples not associated with a classification error.

15. The method of claim 10, wherein each feature channel is selected from the group consisting of a color feature channel, an intensity feature channel, an orientation feature channel, and a face feature channel.

16. A method for generating a saliency map for an input image, comprising:
generating a saliency model using the method of claim 10; and
applying the saliency model at each location in the input image to generate the saliency map,
wherein the saliency map provides a prediction of whether a particular location in the input image, when viewed by one or more subjects, will be a fixation location or a nonfixation location.

17. A system for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the system adapted to receive visual fixation data from one or more subjects, the system comprising:
a visual fixation data mapping module configured to generate one or more visual fixation maps based on the visual fixation data, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location;\n
a feature detection module comprising a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images;\n
a conspicuity map generating module for conspicuity map construction for each feature channel based on features detected by the feature detection module for each feature channel, wherein the conspicuity map generating module is adapted to receive information for each feature channel from the feature detection module and is adapted to output a plurality of conspicuity maps;\n
a weight computation module for generation of a plurality of weights, wherein the plurality of weights is a function of the one or more visual fixation maps from the visual fixation data mapping module and the plurality of conspicuity maps from the conspicuity map generating module, thus generating the saliency model,\n
wherein each weight in the plurality of weights is associated with one conspicuity map in the plurality of conspicuity maps."],"number":17,"annotation":false,"claim":true,"title":false},{"lines":["A system for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the system adapted to receive visual fixation data from one or more subjects, the system comprising:\n
a visual fixation data mapping module configured to generate one or more visual fixation maps based on the visual fixation data, wherein the visual fixation data comprises information on whether each location in one or more images viewed by the one or more subjects is a fixation location or a nonfixation location;\n
a sampling module for selection of a first plurality of locations in the one or more images and a second plurality of locations in the one or more images, wherein each location in the first plurality of locations is a fixation location and each location in the second plurality of locations is a nonfixation location;\n
a feature detection module comprising a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images;\n
a map generating module for map construction for each feature channel based on features detected by the feature detection module for each feature channel, wherein the map generating module is adapted to receive information for each feature channel from the feature detection module and is adapted to output a plurality of relevant maps, and wherein a feature value at each location in a particular relevant map is based on features detected by a feature channel associated with the particular relevant map;\n
a map selecting module for selection of one or more relevant maps from among the plurality of relevant maps based on classification errors, wherein a classification error occurs when a relevant map indicates that a particular location in the first plurality of locations is a nonfixation location or when a relevant map indicates that a particular location in the second plurality of locations is a fixation location, and wherein the selected relevant maps are associated with minimum weighted classification errors, wherein each classification error is multiplied with the sample weight associated with the particular location to form a weighted classification error;\n
a sample weighting module for iterative setting and adjustment of sample weights, wherein a weight initially applied to each location in the first and second plurality of locations is a common value, and wherein the weight applied to each location in the first and second plurality of locations is adjusted based on the classification errors associated with the one or more selected relevant maps;\n
a relevance weighting module for relevance weight calculation based on the classification errors associated with the one or more selected relevant maps; and\n
a saliency model constructing module for saliency model construction based on the relevance weights and the one or more selected relevant maps."],"number":18,"annotation":false,"claim":true,"title":false},{"lines":["A non-transitory computer readable medium comprising computer executable software code stored in said medium, which computer executable software code, upon execution, carries out a method for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the method comprising:\n
providing visual fixation data from one or more subjects, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location;\n
constructing one or more visual fixation maps based on the visual fixation data;\n
providing a plurality of feature channels, wherein each feature channel in the plurality of feature channels detects a particular feature in the one or more images;\n
constructing a plurality of conspicuity maps, wherein each conspicuity map in the plurality of conspicuity maps is associated with one feature channel in the plurality of feature channels, and wherein a particular conspicuity map is constructed based on features detected by a feature channel associated with the particular conspicuity map; and\n
generating a plurality of weights, wherein the plurality of weights is a function of the one or more visual fixation maps and the plurality of conspicuity maps, thus generating the saliency model,\n
wherein each weight in the plurality of weights is associated with one conspicuity map in the plurality of conspicuity maps."],"number":19,"annotation":false,"claim":true,"title":false},{"lines":["A non-transitory computer readable medium comprising computer executable software code stored in said medium, which computer executable software code, upon execution, carries out a method for generating a saliency model from one or more images, the saliency model adapted to predict whether a particular location in an image is a fixation location or a nonfixation location, the method comprising:\n
providing visual fixation data from one or more subjects, wherein the visual fixation data comprises information on whether each location in the one or more images viewed by the one or more subjects is a fixation location or a nonfixation location;\n
selecting a first plurality of locations in the one or more images, wherein each location in the first plurality of locations is a fixation location;\n
selecting a second plurality of locations in the one or more images, wherein each location in the second plurality of locations is a nonfixation location;\n
setting sample weights applied to each location in the first and second plurality of locations to a common value;\n
providing a plurality of feature channels, wherein each feature channel detects a particular feature in the one or more images;\n
generating a plurality of relevant maps for each feature channel based on features detected by each feature channel, wherein a feature value at each location in a particular relevant map is based on features detected by a feature channel associated with the particular relevant map;\n
selecting a relevant map from among the plurality of relevant maps that minimizes weighted classification errors for the first and second plurality of locations, wherein a classification error occurs when a relevant map indicates that a particular location in the first plurality of locations is a nonfixation location or when a relevant map indicates that a particular location in the second plurality of locations is a fixation location, and wherein each classification error is multiplied with the sample weight associated with the particular location to form a weighted classification error;\n
generating a thresholded relevant map for the selected relevant map, wherein a value associated with a particular location in the thresholded relevant map is either a first value or a second value, and wherein the value associated with the particular location is based on the feature value at the particular location in the selected relevant map;\n
calculating a relevance weight applied to the selected relevant map based on the classification errors associated with the selected relevant map;\n
adjusting the sample weights applied to each location in the first and second plurality of locations based on the relevance weight and the classification errors associated with the selected relevant map;\n
iteratively performing the relevant map selecting, the relevance weight calculating, and the sample weight adjusting for a pre-defined number of times; and\n
generating the saliency model based on a sum of a product between each relevance weight and the thresholded relevant map corresponding to the selected relevant map corresponding to the relevance weight."],"number":20,"annotation":false,"claim":true,"title":false}]}},"filters":{"npl":[],"notNpl":[],"applicant":[],"notApplicant":[],"inventor":[],"notInventor":[],"owner":[],"notOwner":[],"tags":[],"dates":[],"types":[],"notTypes":[],"j":[],"notJ":[],"fj":[],"notFj":[],"classIpcr":[],"notClassIpcr":[],"classNat":[],"notClassNat":[],"classCpc":[],"notClassCpc":[],"so":[],"notSo":[],"sat":[]},"sequenceFilters":{"s":"SEQIDNO","d":"ASCENDING","p":0,"n":10,"sp":[],"si":[],"len":[],"t":[],"loc":[]}}