{"search_session":{},"preferences":{"l":"en","queryLanguage":"en"},"patentId":"089-327-696-257-30X","frontPageModel":{"patentViewModel":{"ref":{"entityRefId":"089-327-696-257-30X","entityRefType":"PATENT"},"entityMetadata":{"linkedIds":{"empty":true},"tags":[],"collections":[{"id":10771,"type":"PATENT","title":"University of Southern California - Patent Portfolio","description":"","access":"OPEN_ACCESS","displayAvatar":true,"attested":false,"itemCount":8387,"tags":[],"user":{"id":91044780,"username":"Cambialens","firstName":"","lastName":"","created":"2015-05-04T00:55:26.000Z","displayName":"Cambialens","preferences":"{\"usage\":\"public\",\"beta\":false}","accountType":"PERSONAL","isOauthOnly":false},"notes":[{"id":8241,"type":"COLLECTION","user":{"id":91044780,"username":"Cambialens","firstName":"","lastName":"","created":"2015-05-04T00:55:26.000Z","displayName":"Cambialens","preferences":"{\"usage\":\"public\",\"beta\":false}","accountType":"PERSONAL","isOauthOnly":false},"text":"
with w 1 , . . . , w d each taking a value 1 . . . m, where no two elements are equal (the agent is not allowed to return to the same house). The robber's set of possible pure strategies (houses to rob) is denoted by Q and includes all integers j=1 . . . m. The payoffs (security agent, robber) for pure strategies i, j are: −v z,x , v z,q , for j=z∉i (house z is not on the patrol); and p z c x +(1−p z )(−v z,x ), −p z c q +(1−p z )(v z,q ), for j=zεi (house z is on the patrol). With this structure it is possible to model many different types of robbers with differing motivations; for example, one robber may have a lower cost of getting caught than another, or may value the goods in the various houses differently. To simulate differing types of robbers, a random distribution of varying size was added to the values in the base case. All games can be normalized so that, for each robber type, the minimum and maximum payoffs to the security agent and robber are 0 and 1, respectively. Experimental Results Three sets of experiments were performed by the present inventors and pertinent results are described herein with regard to FIGS. 3-6 . The first set of experiments compares the runtimes of the following four methods: a DOBSS method embodiment for finding the optimal solution, an ASAP embodiment that provides the best policies with limited randomization, a multiple linear programs (multiple-LPs) method that provides optimal policies, and the MIP-Nash procedure for finding the best Bayes-Nash equilibrium. The multiple-LPs method and the MIP-Nash procedure require a normal-form game as input, and so the Harsanyi transformation is required as an initial step (this preprocessing time is not included here, giving those other methods an apparent advantage). For this set of experiments, games were created in worlds of two to seven houses with patrols consisting of two houses, constructing payoff tables as described in the previous subsection. 
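The per-type normalization described above (rescaling each robber type's payoffs so that the minimum maps to 0 and the maximum to 1) can be sketched as follows; this is a minimal illustration, not the inventors' implementation:

```python
def normalize(payoffs):
    """Rescale payoffs affinely so the minimum maps to 0.0 and the maximum to 1.0."""
    lo, hi = min(payoffs), max(payoffs)
    if hi == lo:  # degenerate case: all payoffs equal
        return [0.0 for _ in payoffs]
    return [(p - lo) / (hi - lo) for p in payoffs]

# Example: one robber type's raw payoffs over three outcomes
print(normalize([-2.0, 0.0, 2.0]))  # [0.0, 0.5, 1.0]
```

The same rescaling would be applied separately for each robber type and each player, so the quality-loss percentages reported later are comparable across games.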
The runtime analysis experiments were divided into two sets: one set of graphs showing runtime results for two, three and four houses for all four methods mentioned above, and a second set analyzing runtimes of DOBSS and ASAP for five to seven houses, since the other two methods were found to be quite slow in these scenarios. The first set of runtime graphs in FIG. 3 shows a plot 300 of runtime results for all four methods for two, three and four houses as shown in the three graphs. Each runtime value in the graph(s) corresponds to an average over twenty randomly generated scenarios. The x-axis shows the number of follower types the leader faces, ranging from 1 to 14 adversaries, and the y-axis of the graph shows the runtime in seconds on a log scale (base 10) ranging from 0.01 to 10000 seconds; this range was chosen for convenience of representation. All experiments that did not conclude within 30 minutes (1800 seconds) were cut off. From the runtime graphs, it can be concluded that the DOBSS and ASAP methods outperform the multiple-LPs and MIP-Nash methods with respect to runtime. A maximum of fourteen adversaries were modeled for all domains. For the domain with two houses, while the MIP-Nash and multiple-LPs methods needed about 1000s to solve the problem with fourteen adversaries, both DOBSS and ASAP provided solutions in less than 0.1s. Note that DOBSS provided the optimal solution while ASAP provided the best possible solution under randomization constraints. These randomization constraints were also believed to sometimes cause ASAP to incorrectly claim solutions to be infeasible, the details of which are presented later in this section. The runtime for ASAP in all results was taken as either the time needed to generate an optimal solution or the time needed to determine that no feasible solution exists. The first graph in FIG. 3 shows the trends for all four methods. 
While the runtimes of DOBSS and ASAP show a linear increase on the log-scale plot, the other two procedures show an exponential trend. The runtimes of DOBSS and ASAP are themselves exponential, since they appear linear when plotted on a log-scale graph; nevertheless, they have an exponential speedup over the other two procedures, as seen in the graph. The second graph in FIG. 3 presents results for the domain having three houses. Both MIP-Nash and multiple-LPs could solve this problem only up to seven adversaries within the cutoff time of 1800s, whereas DOBSS and ASAP could solve the problem for all fourteen adversaries modeled in under 10s. (The cutoff of 1800s is also the reason MIP-Nash and multiple-LPs appear to have a constant runtime beyond seven adversaries.) Similar trends can be observed in the third graph with a domain of four houses, where MIP-Nash and multiple-LPs could solve only up to 5 adversaries, whereas DOBSS and ASAP could solve up to fourteen adversaries, within 400s for DOBSS and 500s for ASAP. From this set of three graphs, it can be concluded that DOBSS and ASAP outperform the other two procedures by an exponential margin, as predicted in the proof presented, supra. Between the two fastest methods, DOBSS and ASAP, the DOBSS method runs faster than ASAP in general. To verify this trend, a speedup graph 400 is presented in FIG. 4 for larger problems, e.g., for domains having five, six and seven houses. The x-axis shows the number of adversaries the agent faces and the y-axis represents the speedup obtained by DOBSS over ASAP in percent, i.e., 100×(runtime(ASAP)−runtime(DOBSS))/runtime(DOBSS). For example, for the domain with 5 houses and 5 adversaries, the plot shows a speedup of about 95%, while for 6 and 7 houses it shows speedups of about 70% and 55%, respectively. This implies that if DOBSS needs 100s to solve a problem and has a speedup of 70%, ASAP would need 170s to solve the same problem. 
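The speedup figure just defined reduces to a one-line calculation; using the hypothetical runtimes from the example above, a 100s DOBSS runtime paired with a 170s ASAP runtime yields a 70% speedup:

```python
def speedup_percent(t_dobss, t_asap):
    """Percent speedup of DOBSS over ASAP: 100 * (t_ASAP - t_DOBSS) / t_DOBSS."""
    return 100.0 * (t_asap - t_dobss) / t_dobss

print(speedup_percent(100.0, 170.0))  # 70.0
```

The same formula recovers the 62% average speedup reported below: DOBSS at 100s against ASAP at 162s.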
All these speedups were calculated as an average over the scenarios (out of the 20 modeled for each instance) that generated optimal solutions (or were declared infeasible by ASAP) within the cutoff time of 1800s. Note that results are presented only up to 12, 9 and 8 adversaries for 5, 6 and 7 houses, respectively, since beyond these almost all of the 20 instances cross the cutoff times. From the graph in FIG. 4 , it can be seen that DOBSS has a faster algorithm runtime than ASAP in all cases, since there is always a positive speedup. Further, it may be noticed that the speedups obtained were highest when the number of adversaries is between 2 and 5, and the speedups taper off thereafter. The main reason for this trend is as follows: as the number of adversaries increases, the percentage of infeasible solutions generated by ASAP increases (as seen in the table of FIG. 5 , described, infra). While DOBSS spends most of its time searching for the optimal solution even if it finds a good solution early on, ASAP just needs to determine feasibility of the problem whenever it outputs infeasible solutions, hence bringing down the averaged speedups as the number of infeasible solution instances increases; ASAP is, of course, mistakenly determining these solutions to be infeasible. Calculating the average speedups over all the adversary scenarios for five, six and seven houses, it was found that DOBSS has a 62% average speedup over the ASAP method, i.e., if DOBSS takes 100s, ASAP would need 162s on average. This quantity becomes significant considering the following issues: (a) the ASAP procedure generates infeasible solutions a significant number of times, while DOBSS is always feasible; (b) DOBSS provides the optimal solution, whereas ASAP provides the best solution with limited randomization whenever feasible. The next two sets of results focus on these two issues. A second set of experimental results is introduced in the graph 550 of FIG. 
5 to highlight the infeasibility issue. The infeasibility of ASAP is a new result, and a significant one given that ASAP is the closest competitor of DOBSS in terms of efficiency: while previous work had only presented the ASAP method, the large-scale experiments described here systematically uncovered the issue of infeasibility. In this experiment, the same settings as described above were used. The number of houses was varied between two and seven (columns in the table) and the number of adversary types was varied between one and fourteen (rows in the table). For each fixed number of houses and follower types, twenty scenarios were randomly generated. The ASAP procedure was run, and the number of infeasible solutions obtained is presented as a percentage of all the scenarios tested for each fixed number of houses and adversary types. For example, in the scenario with 8 adversaries (row numbered 8) and 4 houses (column numbered 4), ASAP generates 15% infeasible solutions. Note that for the values marked with a star, the percentage presented in the table represents an upper bound on the number of infeasible scenarios. In these starred scenarios the ASAP procedure ran out of time in many instances. When ASAP ran out of time, it either indicated infeasibility, in which case the instance was classified as an infeasible solution, making the percentage an upper bound (since there might be a feasible solution when sufficient time is provided); or it indicated that there was a feasible solution even though it had not yet found the optimum, in which case it was not marked as infeasible. The following conclusions about ASAP can be made from the table in FIG. 5 : (a) In general, given a fixed number of houses, as the number of adversaries increases (i.e., from 1 to 14), the percentage of infeasible solutions increases (down the columns). (b) Given a fixed number of adversaries, as the number of houses increases, the percentage of infeasible solutions increases (across the rows). 
Although there are exceptions to both conclusions, the general trend is that as the problem size increases (due to an increase in houses, adversaries, or both), ASAP tends to generate more infeasible solutions, making it unsuitable for bigger problems. From the table it can be seen that more than 12.5% of the solutions are infeasible for the five house problem when averaged over all the adversary scenarios. This number increases to as high as 18% and 20% on average for the six and seven house problems. If similar calculations are performed over the last five adversary scenarios, i.e., when the number of adversaries is varied from 10 to 14, figures of 16%, 29% and 25% are obtained, respectively, for the five, six and seven house scenarios. This shows that ASAP produces more infeasible solutions as the problem size increases. Further, there is no procedure to determine before runtime whether ASAP will generate an infeasible solution, thus making the ASAP approach impractical. The third set of experiments compared the solution quality provided by all four methods. Both the DOBSS and multiple-LPs procedures provide the optimal solution and hence are considered equivalent. In FIG. 6 , a table 600 is presented that shows the quality loss averaged over 20 instances, expressed as a percent loss from the optimal solution (provided by DOBSS), for the ASAP and MIP-Nash procedures. The averaged results are presented for all the houses and adversary scenarios as in FIG. 5 . The percent loss of quality is defined as 100×(quality(DOBSS)−quality(x))/quality(DOBSS), where x is the solution quality of ASAP or MIP-Nash. Each cell (corresponding to a fixed number of houses and adversaries) contains two numbers. The first number represents the percent quality loss for ASAP, and the second represents the same for the MIP-Nash procedure. 
The ‘na’ in the table indicates that the algorithm was unable to provide any solution within the cutoff time of 1800s, while ASAP generated a solution (feasible or infeasible) in all instances. The quality of infeasible solutions was taken as zero. As described earlier, for FIG. 6 the rows numbered from 1 to 14 represent the number of adversaries and the columns numbered from 2 to 4 represent the number of houses. For example, for the scenario with 3 houses and 6 adversaries, the quality loss tuple shown in the table is <10.1, 0>. This means that ASAP has a quality loss of 10.1% while MIP-Nash has 0% quality loss. A quality loss of 10.1% means that if DOBSS provided a solution of quality 100 units, the solution quality of ASAP would be 89.9 units. From the table the following can be observed: (a) The quality loss for ASAP is very low for the two houses case and increases, in general, as the number of houses and adversaries increases. The average quality loss was 0.5% over all adversary scenarios for the two houses case, increasing to 9.1% and 13.3%, respectively, for the three and four houses cases. (b) The equilibrium solution provided by the MIP-Nash procedure is also the optimal leader strategy for the 2 and 3 houses cases; hence the quality loss of 0. The solution quality of the equilibrium is lower than the optimal solution for the four houses case by almost 8% when averaged over all the available data. From the three sets of experimental results it can be concluded that DOBSS and ASAP are significantly faster than the other procedures, with DOBSS being the fastest method. Further, DOBSS always provides a feasible exact solution, while ASAP is a heuristic that may have lower solution quality and also generates infeasible solutions a significant number of times. Hence, in exemplary embodiments, DOBSS, an efficient and exact procedure for solving Bayesian Stackelberg games, may be preferred. 
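The percent quality loss used in the table reduces to the same kind of one-line calculation; e.g., an ASAP quality of 89.9 units against a DOBSS optimum of 100 units gives a loss of 10.1%:

```python
def quality_loss_percent(q_dobss, q_other):
    """Percent loss from the optimum: 100 * (q_DOBSS - q_other) / q_DOBSS."""
    return 100.0 * (q_dobss - q_other) / q_dobss

print(quality_loss_percent(100.0, 89.9))
```

Note that infeasible ASAP solutions are scored as quality zero, so each infeasible instance contributes a full 100% loss to the averages.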
Accordingly, embodiments of the present disclosure can provide new exact methods (called DOBSS) for finding the optimal strategy for the leader in a Bayesian Stackelberg game and can provide various advantages over prior art techniques. In such games, one agent (the leader) must commit to a possibly mixed strategy that can be observed by other agents (the followers) before they choose their own strategies. Such games, in which the leader is uncertain about the types of adversaries it may face, are extremely valuable in modeling domains involving security, including patrolling, setting up checkpoints, network routing, transportation systems and others; thus, solution techniques such as DOBSS for efficiently solving such games are crucial. DOBSS techniques can be orders of magnitude faster than prior art exact methods. DOBSS techniques thus represent a significant advance in the state of the art in addressing security domains. One key advantage of the DOBSS approach is that it operates directly on the compact Bayesian representation, without requiring the Harsanyi transformation. In particular, DOBSS obtains a decomposition scheme by exploiting the property that the followers are independent of each other. Since the problem is NP-hard, it would not be anticipated that a simple decomposition would be practical. Instead, the key to the DOBSS decomposition scheme is the observation that evaluating the leader strategy against a Harsanyi-transformed game matrix is equivalent to evaluating it against each of the game matrices for the individual follower types. This decomposition is analogous to ASAP techniques, which can also operate directly on the untransformed Bayesian game to find a high-quality strategy for the leader; however, ASAP generates infeasible solutions as problem sizes increase and does not guarantee an optimal solution due to its controlled randomization. 
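The decomposition observation (evaluating a leader strategy type by type and weighting by type probability, rather than against one Harsanyi-transformed matrix) can be illustrated with a brute-force sketch on a tiny, hypothetical two-strategy game. The matrices, grid search, and leader-favorable tie-breaking below are illustrative assumptions for exposition only, not the DOBSS MILP itself:

```python
def best_leader_strategy(types, grid=100):
    """Brute-force search over discretized leader mixed strategies.

    types: list of (prob, R, C) where R[i][j] is the leader payoff and
    C[i][j] the follower payoff when the leader plays pure strategy i
    (of two) and the follower plays pure strategy j (of two).
    Each follower type best-responds independently (the decomposition
    property); ties are broken in the leader's favor (strong Stackelberg).
    """
    best_x, best_val = None, float("-inf")
    for k in range(grid + 1):
        x = k / grid  # probability of leader pure strategy 0
        mix = (x, 1.0 - x)
        total = 0.0
        for prob, R, C in types:
            # follower's expected payoff for each of its pure strategies
            fpay = [sum(mix[i] * C[i][j] for i in range(2)) for j in range(2)]
            # follower-optimal response, leader-favorable tie-breaking
            top = max(fpay)
            j_star = max((j for j in range(2) if fpay[j] == top),
                         key=lambda j: sum(mix[i] * R[i][j] for i in range(2)))
            total += prob * sum(mix[i] * R[i][j_star] for i in range(2))
        if total > best_val:
            best_x, best_val = x, total
    return best_x, best_val

# Hypothetical two-type example: both types share the leader matrix R
# but differ in their own payoffs C (cf. robbers who value houses differently).
R = [[2.0, 4.0], [1.0, 3.0]]
types = [(0.5, R, [[1.0, 0.0], [0.0, 2.0]]),
         (0.5, R, [[0.0, 2.0], [1.0, 0.0]])]
x, v = best_leader_strategy(types)
```

The inner loop never builds the exponentially larger joint (Harsanyi) matrix: each type's best response is computed from its own small matrix and the results are combined by the type probabilities, which is exactly the equivalence the decomposition relies on. DOBSS replaces this naive grid search with a single mixed-integer linear program.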
In addition, the experiments show that ASAP can be slower than DOBSS as the number of follower types increases. DOBSS provides advantages over its other main competitor, i.e., the prior art multiple-LPs method introduced to compute optimal leader strategies for non-Bayesian games. This prior art method faces an exponential explosion when applied to domains of interest in the present disclosure. Furthermore, it is unlikely to be decomposable into a small number of games given that the problem being attacked is NP-hard; DOBSS has the advantage of decomposition, though it works with mixed-integer linear programs (MILPs) rather than LPs. Finally, DOBSS requires the solution of only one optimization problem, rather than a series of problems as in the LP method. Additionally, the sequence form (of the prior art) provides an alternative compact representation to the normal form representation, and has been shown in games like poker to provide significant speedups in finding equilibrium solutions over approaches based on normal form representations. Such a representation, however, cannot be directly used in Stackelberg games as described herein. In particular, the game models described herein assume that an adversary does not know the specific strategy (e.g., patrolling plan) that an agent will follow, but only the agent's mixed strategy. Representing such a commitment to a mixed strategy in a sequence form representation is difficult; it would need to represent all possible mixed strategies in advance. Furthermore, the prior art has not focused on computing optimal responses in Stackelberg games, but rather only on finding equilibria. While certain embodiments have been described herein, it will be understood by one skilled in the art that the methods, systems, and apparatus of the present disclosure may be embodied in other specific forms without departing from the spirit thereof. 
Accordingly, the embodiments described herein, and as claimed in the attached claims, are to be considered in all respects as illustrative of the present disclosure and not restrictive.","lang":"en","source":"USPTO_FULLTEXT","data_format":"ORIGINAL"}},"description_lang":["en"],"has_description":true,"has_docdb":true,"has_inpadoc":true,"has_full_text":true,"biblio_lang":"en"},"jurisdiction":"US","collections":[],"usersTags":[],"lensId":"089-327-696-257-30X","publicationKey":"US_2009_0099987_A1","displayKey":"US 2009/0099987 A1","docAssets":{"lensId":"089-327-696-257-30X","pdfUrl":"https://www.lens.org/images/patent/US/20090099987/A1/US_2009_0099987_A1.pdf","images":[{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000001.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000001.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000002.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000002.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000003.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000003.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000004.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000004.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000005.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000005.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/pa
ge/10pc/00000006.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000006.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000007.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000007.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000008.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000008.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000009.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000009.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000010.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000010.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000011.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000011.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000012.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000012.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000013.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000013.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000014.png","full":"https://s3
-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000014.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000015.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000015.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000016.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000016.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000017.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000017.png"},{"thumb":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/10pc/00000018.png","full":"https://s3-us-west-2.amazonaws.com/lens-resource/patent/US/A1/20090099/20090099987/image/page/full/00000018.png"}],"fallover":false},"countryName":"USA","inventorModel":{"inventors":[{"name":{"value":"TAMBE MILIND","valueNormalised":"Tambe Milind"},"inventorship":null},{"name":{"value":"PARUCHURI PRAVEEN","valueNormalised":"Paruchuri Praveen"},"inventorship":null},{"name":{"value":"ORDONEZ FERNANDO","valueNormalised":"Ordonez Fernando"},"inventorship":null},{"name":{"value":"KRAUS SARIT","valueNormalised":"Kraus Sarit"},"inventorship":null},{"name":{"value":"PEARCE JONATHAN","valueNormalised":"Pearce Jonathan"},"inventorship":null},{"name":{"value":"MARECKI JANUSZ","valueNormalised":"Marecki Janusz"},"inventorship":null}],"inventorships":[],"unmatchedInventorships":[],"activeUserHasInventorship":false},"simpleFamilyId":223404656,"citesPatentCount":0,"countrySpec":{"countryName":"USA","description":"FIRST PUBLISHED PATENT APPLICATION [FROM 2001 
ONWARDS]","rule":"pubdate:AFTER:15-03-2001","docType":"PATENT_APPLICATION"},"pageTitle":"US 2009/0099987 A1 - Decomposed Optimal Bayesian Stackelberg Solver","documentTitle":"Decomposed Optimal Bayesian Stackelberg Solver"},"claims":{"source":"xml_claims","claims":[{"lines":["A method of determining a leader's strategy for a security domain, the method comprising:\n
formulating a leader's policy;\n
formulating an optimization problem the follower solves to find its optimal response to the leader's policy;\n
maximizing the payoff of the leader, given that the follower uses an optimal response that is a function of the leader's policy, and formulating the problem as a mixed-integer quadratic problem (MIQP);\n
reformulating the leader payoff problem to a mixed-integer linear program (MILP);\n
solving the MILP; and\n
directing a patrolling strategy of an agent within the security domain based on the solved MILP."],"number":1,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 1, wherein the optimization problem the follower solves is of the form,\n∑j∈Q∑i∈ℵCijxiqj,\nwherein the leader is the row player and the follower the column player, the vector of strategies of the leader are denoted by x, q denotes the vector of strategies of the follower, X and Q denote the index sets of the leader and follower's pure strategies, respectively, and the payoff matrix C correspond to Cij that is the reward of the follower when the leader takes pure strategy i and the follower takes pure strategy j."],"number":2,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 2, wherein the optimization problem is subject to constraints, ΣjεQqj=1 and q≧0."],"number":3,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 1, wherein maximizing the payoff of the leader comprise solving a non-convex quadratic objective function."],"number":4,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 2, wherein the MIQP is of the form, max\n∑i∈ℵ∑j∈QRijq(x)xi,\nand wherein the payoff matrix R corresponds to Rij, the reward of the leader."],"number":5,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 5, wherein the MIQP is subject to constraints,\n∑iεℵxi=1\nand xlε{0, . . . 
, 1}."],"number":6,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 2, wherein the MIQP is of the form, max\n∑i∈ℵ∑j∈QRijxiqj,\nand wherein the payoff matrix R corresponds to Rij, the reward of the leader."],"number":7,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 7, wherein the MIQP is subject to constraints,\n∑ixi=k,\n∑iεQqj=1,\n0≤(a-∑iεℵCijxi)≤(1-qj)M,\nxi∈{0,1,…,1},and\nqi∈{0,1}."],"number":8,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 1, further comprising performing a decomposition for multiple adversaries."],"number":9,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 9, wherein performing a decomposition comprises constructing a decomposed MIQP of the form,\n∑i∈ℵ∑l∈L∑j∈QplRijlxiqjl."],"number":10,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 10, wherein the MIQP is subject to constraints,\n∑ixi=1,\n∑iεQqjl=1,\n0≤(al-∑iεℵCijlxi)≤(1-qjl)M,\nxi∈{0,1,…,1},\nqjl∈{0,1},and\na∈"],"number":11,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 9, wherein performing a decomposition comprises constructing a decomposed MILP of the form, maxq, z, a\n∑i∈ℵ∑l∈L∑j∈QplRijlzijj."],"number":12,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 12, wherein the MILP is subject to constraints,\n∑iεℵ∑jεQzijl=1,\n∑jεQzijl≤1,\nqjl≤∑iεℵzijl≤1,\n∑jεQqjl=1,\n0≤(al-∑iεℵCijlxi(∑hεQzihl))≤(1-qjl)M,\n∑iεQzijl=∑jεQzijl,\nzijl∈{0,…,1},\nqjl∈{0,1},and\na∈."],"number":13,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 1, wherein the agent comprises a robot or UAV."],"number":14,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 14, wherein the security domain comprises infrastructure to be protected."],"number":15,"annotation":false,"claim":true,"title":false},{"lines":["The method of claim 14, wherein the security domain comprises 
one or more airport terminals to be protected."],"number":16,"annotation":false,"claim":true,"title":false},{"lines":["A computer-executable program product comprising a computer-readable medium with resident computer-readable instructions, the computer readable instructions comprising:\n
formulating a leader's policy;\n
formulating an optimization problem the follower solves to find its optimal response to the leader's policy;\n
maximizing the payoff of the leader, given that the follower uses an optimal response that is a function of the leader's policy, and formulating the problem as a mixed-integer quadratic problem (MIQP);\n
reformulating the leader payoff problem to a mixed-integer linear program (MILP);\n
solving the MILP, wherein the leader's strategy for a security domain can be determined; and\n
directing a patrolling strategy of an agent within the security domain based on the solved MILP."],"number":17,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 17, wherein the computer-readable instructions comprise an optimization problem the follower solves of the form,\n∑j∈Q∑i∈ℵCijxiqj,\nwherein the leader is the row player and the follower the column player, the vector of strategies of the leader are denoted by x, q denotes the vector of strategies of the follower, X and Q denote the index sets of the leader and follower's pure strategies, respectively, and the payoff matrix C correspond to Cij that is the reward of the follower when the leader takes pure strategy i and the follower takes pure strategy j."],"number":18,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 18, wherein the computer-readable instructions including the optimization problem include constraints, ΣjεQqj=1 and q≧0."],"number":19,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 18, wherein maximizing the payoff of the leader comprise solving a non-convex quadratic objective function."],"number":20,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 18, wherein the MIQP is of the form, max\n∑i∈ℵ∑j∈QRijq(x)xi,\nand wherein the payoff matrix R corresponds to Rij, the reward of the leader."],"number":21,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 21, wherein the MIQP is subject to constraints,\n∑iεℵxi=1\nand xiε{0, . . . 
, 1}."],"number":22,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 18, wherein the MIQP is of the form, max\n∑i∈ℵ∑j∈QRijxiqj,\nand wherein the payoff matrix R corresponds to Rij, the reward of the leader."],"number":23,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 23, wherein the MIQP is subject to constraints,\n∑ixi=k,\n∑iεQqj=1,\n0≤(a-∑iεℵCijxi)≤(1-qj)M,\nxi∈{0,1,…,1},and\nqi∈{0,1}."],"number":24,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 17, further comprising instructions for performing a decomposition for multiple adversaries."],"number":25,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 25, wherein performing a decomposition comprises constructing a decomposed MIQP of the form,\n∑i∈ℵ∑l∈L∑j∈QplRijlxiqjl."],"number":26,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 26, wherein the MIQP is subject to constraints,\n∑ixi=1,\n∑jεQqjl=1,\n0≤(al-∑iεℵCijlxi)≤(1-qjl)M,\nxi∈{0,1,…,1},\nqjl∈{0,1},and\na∈."],"number":27,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 25, wherein performing a decomposition comprises constructing a decomposed MILP of the form,\n∑i∈ℵ∑l∈L∑j∈QplRijlzijl."],"number":28,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 28, wherein the MILP is subject to constraints,\n∑iεℵ∑jεQzijl=1,\n∑jεQzijl≤1,\nqjl≤∑iεℵzijl≤1,\n∑jεQqjl=1,\n0≤(al-∑iεℵCijlxi(∑iεQzihl))≤(1-qjl)M,\n∑iεQzijl=∑iεQzijl,\nzijl∈{0,…,1},\nqjl∈{0,1},and\na∈."],"number":29,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 17, further comprising instructions for directing the patrolling of robot of a UAV based on the 
solved MILP."],"number":30,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 30, wherein the patrolling domain comprises infrastructure to be protected."],"number":31,"annotation":false,"claim":true,"title":false},{"lines":["The computer-executable program product of claim 30, wherein the patrolling domain comprises one or more airport terminals to be protected."],"number":32,"annotation":false,"claim":true,"title":false},{"lines":["A computer system comprising:\n
a processor configured and arranged to determine a leader's strategy for a security domain;\n
fix the policy of a leader to a policy;\n
formulate an optimization problem the follower solves to find its optimal response to the leader's policy;\n
maximize the payoff of the leader, given that the follower uses an optimal response that is a function of the leader's policy, and formulate the problem as a mixed-integer quadratic problem (MIQP);\n
reformulate the leader payoff problem to a mixed-integer linear program (MILP);\n
solve the MILP; and\n
direct a patrolling strategy of an agent within the security domain based on the solved MILP."],"number":33,"annotation":false,"claim":true,"title":false},{"lines":["The system of claim 33, wherein the processor is configured and arranged to direct a patrolling route of a robot or UAV in the security domain."],"number":34,"annotation":false,"claim":true,"title":false}]}},"filters":{"npl":[],"notNpl":[],"applicant":[],"notApplicant":[],"inventor":[],"notInventor":[],"owner":[],"notOwner":[],"tags":[],"dates":[],"types":[],"notTypes":[],"j":[],"notJ":[],"fj":[],"notFj":[],"classIpcr":[],"notClassIpcr":[],"classNat":[],"notClassNat":[],"classCpc":[],"notClassCpc":[],"so":[],"notSo":[],"sat":[]},"sequenceFilters":{"s":"SEQIDNO","d":"ASCENDING","p":0,"n":10,"sp":[],"si":[],"len":[],"t":[],"loc":[]}}