# HG changeset patch
# User shellac
# Date 1616436770 0
# Node ID 4f3585e2f14b5eccc4962093494006a337d39f56
"planemo upload commit 60cee0fc7c0cda8592644e1aad72851dec82c959"

diff -r 000000000000 -r 4f3585e2f14b README.md
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/README.md	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,3 @@
+# Sam consensus V3
+
+Sam consensus V3 Galaxy wrapper
\ No newline at end of file
diff -r 000000000000 -r 4f3585e2f14b call_consensus_from_sam_3.pl
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/call_consensus_from_sam_3.pl	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,517 @@
+#!/usr/bin/perl
+# This calls a consensus based on the SAM file. It does not use quality scores etc.; it simply counts up each event and calls a simple most-observations consensus.
+# The SAM file must be sorted in order to cope with indels.
+# Usage: ./call_consensus_from_sam_3.pl aligned.sam genome.fasta 10
+# The number 10 is the minimum MAPQ you want to consider.
+
+
+open (INFILEFASTA, "$ARGV[1]"); # opens files
+
+open (INFILESAM, "$ARGV[0]");
+$full_len_cut = 1000;
+$min_mapq = $ARGV[2];
+$full_len_count = 0;
+open (OUT, ">$ARGV[0].Observed_differences_list.txt");
+open (OUTINDELS, ">$ARGV[0].Indels_applied_list.txt");
+open (OUTMINOR, ">$ARGV[0].minor_variants.txt");
+print OUTMINOR "Genome\tPosition\tA\tC\tG\tT\tDepth\n";
+open (OUTMINORB, ">$ARGV[0].dominant_minor_variant.txt");
+print OUTMINORB "Genome\tPosition\tA\tC\tG\tT\tDominant nucleotide\tFrequency\tTotal Depth\n";
+print OUT "Genome\tPosition\tReference nucleotide\tGATC Depth count\tG count\tA count\tT count\tC count\tConsensus (. means no change from reference)\tDeletions\tTotal insertions recorded\tThis insertion most popular\tThis many recorded most popular insertions\n";
+open (OUTNEWG, ">$ARGV[0].corrected_genome_snps_only.txt");
+open (OUTNEWGINDEL, ">$ARGV[0].corrected_genome_snps_and_indels.txt");
+open (OUTINDELKEY, ">$ARGV[0].key_indels.txt");
+open (OUTINDELKEYSIG, ">$ARGV[0].Significant_key_indels.txt");
+open (OUTFULLSAM, ">$ARGV[0].full_len_sam.sam");
+$start_time = time;
+
+%for_translation = (TTT=>"F", TTC=>"F", TCT=>"S", TCC=>"S", TAT=>"Y", TAC=>"Y", TGT=>"C", TGC=>"C", TTA=>"L", TCA=>"S", TAA=>"*", TGA=>"*", TTG=>"L", TCG=>"S", TAG=>"*", TGG=>"W", CTT=>"L", CTC=>"L", CCT=>"P", CCC=>"P", CAT=>"H", CAC=>"H", CGT=>"R", CGC=>"R", CTA=>"L", CTG=>"L", CCA=>"P", CCG=>"P", CAA=>"Q", CAG=>"Q", CGA=>"R", CGG=>"R", ATT=>"I", ATC=>"I", ACT=>"T", ACC=>"T", AAT=>"N", AAC=>"N", AGT=>"S", AGC=>"S", ATA=>"I", ACA=>"T", AAA=>"K", AGA=>"R", ATG=>"M", ACG=>"T", AAG=>"K", AGG=>"R", GTT=>"V", GTC=>"V", GCT=>"A", GCC=>"A", GAT=>"D", GAC=>"D", GGT=>"G", GGC=>"G", GTA=>"V", GTG=>"V", GCA=>"A", GCG=>"A", GAA=>"E", GAG=>"E", GGA=>"G", GGG=>"G");
+%rev_translation = (GGC=>"A", ACT=>"S", TCA=>"*", ACA=>"C", TCG=>"R", GAT=>"I", GTT=>"N", GCT=>"S", GTA=>"Y", TGT=>"T", CGA=>"S", CGG=>"P", CAG=>"L", TGC=>"A", CAC=>"V", CTT=>"K", AAC=>"V", GTG=>"H", TCT=>"R", GGT=>"T", TGG=>"P", CCA=>"W", GAG=>"L", GCG=>"R", CAA=>"L", TTA=>"*", CTG=>"Q", CGT=>"T", CAT=>"M", TTT=>"K", TAC=>"V", CTA=>"*", AAG=>"L", TCC=>"G", GAC=>"V", GCA=>"C", TGA=>"S", AAT=>"I", ATA=>"Y", ATT=>"N", AGT=>"T", TTG=>"Q", GTC=>"D", ACC=>"G", GGA=>"S", AAA=>"F", CCT=>"R", ACG=>"R", CCG=>"R", ATG=>"H", TAT=>"I", GGG=>"P", CCC=>"G", TAA=>"L", CTC=>"E", TAG=>"L", ATC=>"D", AGA=>"S", GAA=>"F", CGC=>"A", GCC=>"G", AGC=>"A", TTC=>"E", AGG=>"P");
+%base_pair = (G=>"A", A=>"T", T=>"A", C=>"G");
+
+print OUTINDELS "Chromosome\tType\tLocation\tIndel_count\tIndel_size\tRead_depth\tIndel size\tIndel proportion\n";
+
+
+%transcripts = ();
+%orf_hash =();
+%peptides_pep_score = ();
+%unique_fastas = ();
+%peptides_and_utn = ();
+%utn_and_peptides = ();
+%fasta_head_and_peptides = ();
+%indel_tab = ();
+
+$fasta_out ="";
+$fasta_header = "Not_started_yet";
+while ($fasta_line = <INFILEFASTA>)
+{
+    chomp $fasta_line;
+    print "\n$fasta_line\n";
+    if (substr($fasta_line,0,1) eq ">") {
+
+        if ($fasta_header ne "Not_started_yet") {
+            #print "\n\nFasta header $fasta_header\n\n";
+            @temp = split (/\s+/, $fasta_header);
+            $fasta_head_no_sp = $temp[0];
+            #print "\n\nAfter split $fasta_head_no_sp\n\n";
+            substr($fasta_head_no_sp,0,1) = "";
+            $transcripts {$fasta_head_no_sp} = $fasta_out;
+
+            #print "\n\n$fasta_head_no_sp\n\n";
+        }
+
+        $fasta_header = $fasta_line;
+
+        $fasta_out ="";
+    }
+    if (substr($fasta_line,0,1) ne ">") {$fasta_out = $fasta_out.$fasta_line;}
+}
+($fasta_head_no_sp, $restofit) = split (/ /, $fasta_header);
+substr($fasta_head_no_sp,0,1,"");
+$transcripts {$fasta_head_no_sp} = $fasta_out;
+
+#print "\n$fasta_header\n\n$fasta_head_no_sp\n";
+
+$faster_header = "";
+$chrom_proc = 0;
+$chromosome = "notstartedyet";
+$old_chromosome = "";
+#exit;
+
+$sam_count = 0;
+
+$sam_header= "";
+$transcript_count = 0;
+
+open (INFILESAM, "$ARGV[0]");
+
+while ($sam_line = <INFILESAM>)
+{
+    if (substr($sam_line,0,1) eq "@") {next;}
+    #if (($sam_count % 10000) == 0) {print "$sam_count entries processed\n";}
+    #print "$sam_count\n";
+    $sam_count ++;
+    chomp $sam_line;
+    @sam_cells = split (/\t/, $sam_line);
+    if ($sam_cells[2] eq "*") {next;}
+    if (($sam_cells[2] ne $chromosome) and ($chromosome ne "notstartedyet")) {
+        &process_data;
+        $chromosome = $sam_cells[2];
+        $genome = $transcripts{$chromosome};
+        $len_gen = length $genome;
+        delete $transcripts{$chromosome};
+        $chrom_proc ++;
+        print "hi $chromosome is being processed this is chromosome number $chrom_proc\n";
+        %g = ();
+        %a = ();
+        %t = ();
+        %c = ();
+        %ins = ();
+        %del = ();
+        %depth = ();
+    }
+    # Data processed
+    if ($chromosome eq "notstartedyet") {
+        $chromosome = $sam_cells[2];
+        $genome = $transcripts{$chromosome};
+        $len_gen = length $genome;
+        delete $transcripts{$chromosome};
+        $chrom_proc ++;
+        print "hi $chromosome is being processed this is chromosome number $chrom_proc\n";
+        %g = ();
+        %a = ();
+        %t = ();
+        %c = ();
+        %ins = ();
+        %del = ();
+        %depth = ();
+    }
+
+    $utn = $sam_cells[0];
+    $mapq = $sam_cells[4];
+    $sequence = uc $sam_cells[9];
+    $seq_len = length $sequence;
+    $sequence =~ tr/Uu/TT/;
+    #print "$sam_cells[5]\n";
+    if ($min_mapq > $mapq) {next;}
+    #print "OK";
+    $flag = $sam_cells [1];
+    $genome_position = $sam_cells[3] - 1;
+    $fl_pos = $genome_position;
+    if ($genome_position < $full_len_cut) {$full_len_flag = "S";
+        #if (length $sequence > 28000) {print "possible full len\n";}
+    } else {$full_len_flag = "I";}
+    $sam_position = 0;
+    @cigar = split(/(M|I|D|N|S|H|P|X|=)/, $sam_cells[5]);
+    $array_cigar = 1;
+    $temp_len = 0;
+    while (length($cigar[$array_cigar]) >=1){
+        $cigar_value = $cigar[$array_cigar - 1];
+        if (($cigar[$array_cigar] eq "M") or ($cigar[$array_cigar] eq "I") or ($cigar[$array_cigar] eq "S")){$temp_len = $temp_len + $cigar_value;}
+        $array_cigar = $array_cigar + 2;
+    }
+    $tran_len = length $sequence;
+    if ($tran_len ne $temp_len) {print "cigar fail\n";next;}
+    $array_cigar = 1;
+    while (length($cigar[$array_cigar]) >=1){
+        #print "cigar entry is $cigar[$array_cigar]\n";
+        $cigar_value = $cigar[$array_cigar - 1];
+        if ($cigar[$array_cigar] =~ /[H]/){
+            #nothing to do
+        }
+        if (($cigar[$array_cigar] =~ /[S]/) and (($array_cigar + 1) < scalar (@cigar)))
+        {
+            $soft_clip = $cigar_value;
+            $sam_position = $sam_position + $cigar_value;
+            $seq_len = $seq_len - $cigar_value;
+        }
+
+        if ($cigar[$array_cigar] =~ /[M]/){
+            $temp_count = 1;
+            #print "M";
+            while ($temp_count <= $cigar_value)
+            {
+                $depth{$genome_position} ++;
+                $alt = substr($sequence, $sam_position, 1);
+                #print "alternative $alt\n";
+                if ($alt eq "G") {$g{$genome_position} ++;}
+                if ($alt eq "A") {$a{$genome_position} ++;}
+                if ($alt eq "T") {$t{$genome_position} ++;}
+                if ($alt eq "C") {$c{$genome_position} ++;}
+                $temp_count ++;
+                $genome_position ++;
+                $sam_position ++;
+                $fl_pos ++;
+            }
+        }
+
+        #if (($cigar[$array_cigar] =~ /[D]/) and ($cigar_value >4)) {$cigar[$array_cigar] = "N";}
+
+        if ($cigar[$array_cigar] =~ /[D]/){
+            $temp_cv = $cigar_value - 1;
+            $tmp_name = "$chromosome\tDeletion\t$genome_position\t$cigar_value\t ";
+            if (exists $indel_tab{$tmp_name}){$indel_tab{$tmp_name} ++;} else {$indel_tab{$tmp_name} = 1;}
+            while ($temp_cv >= 0){
+                if (exists $del{$genome_position + $temp_cv}) {$del{$genome_position + $temp_cv} ++;} else {$del{$genome_position + $temp_cv} = 1;}
+                $temp_cv --;
+            }
+            $genome_position = $genome_position + $cigar_value;
+            $fl_pos = $fl_pos + $cigar_value;
+        }
+
+        if ($cigar[$array_cigar] =~ /[I]/){
+            $insertion = substr ($sequence, $sam_position, $cigar_value);
+            $tmp_name = "$chromosome\tInsertion\t$genome_position\t$cigar_value\t$insertion";
+            if (exists $indel_tab{$tmp_name}){$indel_tab{$tmp_name} ++;} else {$indel_tab{$tmp_name} = 1;}
+            if (exists $ins{$genome_position}) {$ins{$genome_position} = $ins{$genome_position}."\t$insertion";} else {$ins{$genome_position} = $insertion;}
+            $sam_position = $sam_position + $cigar_value;
+        }
+
+        if ($cigar[$array_cigar] =~ /[N]/){
+            $genome_position = $genome_position + $cigar_value;
+        }
+        $array_cigar = $array_cigar +2;
+    }
+    #if ($fl_pos > ($len_gen - 30)) {print "Stops\n";}
+    #if (($fl_pos > ($len_gen - $full_len_cut)) and ($full_len_flag eq "S")) {print OUTFULLSAM "$sam_line\n"; print "Full length found\n"; $full_len_count ++;}
+    if ($seq_len > ($len_gen - $full_len_cut)) {print OUTFULLSAM "$sam_line\n"; print "Full length found $seq_len X $len_gen\n"; $full_len_count ++;}
+}
+
+
+#all done just need to process the last chromosome...
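+# (Note: the loop above only flushes a chromosome's tallies via &process_data
+# when the next chromosome name appears in the sorted SAM, so the counts for
+# the final chromosome are still pending and are handled by this last call.)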
+&process_data;
+
+foreach $keys (keys %transcripts){
+    $genome = $transcripts{$keys};
+    print OUTNEWGINDEL ">No_mapped_reads_to_".$keys."_genome_so_no_corrections\n$genome\n";
+    print OUTNEWG ">No_mapped_reads_to_".$keys."_genome_so_no_corrections\n$genome\n";
+}
+
+
+$time_elapsed = time - $start_time;
+
+print "Processed $sam_count entries for $chrom_proc chromosomes in $time_elapsed seconds\nFull length entries is $full_len_count\n";
+
+exit;
+
+
+sub process_data { #now to process all the data on this chromosome before moving on to the next one
+    $genome_position = 0;
+    $len_genome = length($genome);
+    $old_genome = $genome;
+    open (TEMP, ">temp.txt");
+
+    while ($genome_position <= $len_genome){
+        #print "at $genome_position\n";
+        %ins_hash = ();
+        %del_hash = ();
+        $ref_nucleotide = substr ($genome, $genome_position, 1);
+        $sam_con = $ref_nucleotide;
+        $temp_pos = $genome_position + 1;
+
+        if ($depth{$genome_position} > 0) {
+            $A = $a{$genome_position}/$depth{$genome_position};
+            $C = $c{$genome_position}/$depth{$genome_position};
+            $G = $g{$genome_position}/$depth{$genome_position};
+            $T = $t{$genome_position}/$depth{$genome_position};
+            $test = -1;
+            $consensus = "N";
+            if ($A > $test){$test = $A; $consensus = "A";}
+            if ($C > $test){$test = $C; $consensus = "C";}
+            if ($G > $test){$test = $G; $consensus = "G";}
+            if ($T > $test){$test = $T; $consensus = "T";}
+            if ($consensus eq "A") {$A = $A * -1; $test = $A; print OUTMINOR "$chromosome\t$temp_pos\t$A\t$C\t$G\t$T\t$depth{$genome_position}\n";}
+            if ($consensus eq "C") {$C = $C * -1; $test = $C; print OUTMINOR "$chromosome\t$temp_pos\t$A\t$C\t$G\t$T\t$depth{$genome_position}\n";}
+            if ($consensus eq "G") {$G = $G * -1; $test = $G; print OUTMINOR "$chromosome\t$temp_pos\t$A\t$C\t$G\t$T\t$depth{$genome_position}\n";}
+            if ($consensus eq "T") {$T = $T * -1; $test = $T; print OUTMINOR "$chromosome\t$temp_pos\t$A\t$C\t$G\t$T\t$depth{$genome_position}\n";}
+            if ($consensus eq "N") {print OUTMINOR "$chromosome\t$temp_pos\t0\t0\t0\t0\n";}
+            $major = $test * -1;
+            if ($A > $test){$test = $A; $secconsensus = "A";}
+            if ($C > $test){$test = $C; $secconsensus = "C";}
+            if ($G > $test){$test = $G; $secconsensus = "G";}
+            if ($T > $test){$test = $T; $secconsensus = "T";}
+
+            if ($secconsensus eq "A") {print OUTMINORB "$chromosome\t$temp_pos\t$A\t0\t0\t0\t$consensus\t$major\t$depth{$genome_position}\n";}
+            if ($secconsensus eq "C") {print OUTMINORB "$chromosome\t$temp_pos\t0\t$C\t0\t0\t$consensus\t$major\t$depth{$genome_position}\n";}
+            if ($secconsensus eq "G") {print OUTMINORB "$chromosome\t$temp_pos\t0\t0\t$G\t0\t$consensus\t$major\t$depth{$genome_position}\n";}
+            if ($secconsensus eq "T") {print OUTMINORB "$chromosome\t$temp_pos\t0\t0\t0\t$T\t$consensus\t$major\t$depth{$genome_position}\n";}
+            if ($secconsensus eq "N") {print OUTMINORB "$chromosome\t$temp_pos\t0\t0\t0\t0\t$consensus\t$major\t$depth{$genome_position}\n";}
+
+        } else {print OUTMINOR "$chromosome\t$temp_pos\t0\t0\t0\t0\n"; print OUTMINORB "$chromosome\t$temp_pos\t0\t0\t0\t0\n";}
+
+        if ($depth{$genome_position} < 3) {
+            if ($len_genome <100000){
+                $sam_con = ".";
+                print OUT "$chromosome\t$temp_pos\t$ref_nucleotide\t$depth{$genome_position}\t$g{$genome_position}\t$a{$genome_position} \t$t{$genome_position} \t$c{$genome_position}\t$sam_con\t$del{$genome_position}\t$ins_count\t$temp_most_ins\t$temp_most\n";
+            }
+            $genome_position ++;
+            next;
+        }
+
+        $temp_most = 0;
+        $temp_most_ins = "";
+        $ins_count = 0;
+        if (exists $ins{$genome_position}){
+            $temp_ins = $ins{$genome_position};
+            @insertions = split (/\t/, $temp_ins);
+            $ins_count = scalar @insertions;
+            foreach $key (@insertions)
+            {
+                $ins_hash{$key} ++;
+                # print "found $key\n";
+            }
+            $temp_most = 0;
+            $temp_most_ins = "";
+            foreach $key (keys %ins_hash){
+                # print "$key $ins_hash{$key}\n";
+                if ($ins_hash{$key} > $temp_most) {$temp_most = $ins_hash{$key}; $temp_most_ins = $key;}
+            }
+
+            #if ($ins_count > $depth{$genome_position}) {print OUT "POSSIBLE $ins_count INSERTION AT $genome_position\t $ins{$genome_position}\n";}
+            if ($temp_most > ($depth{$genome_position}) ) {
+                print OUT "Chromosome $chromosome TOTAL $ins_count INSERTION with $depth{$genome_position} no insertions AT $temp_pos most abundant is $temp_most_ins with $temp_most insertions\t $ins{$genome_position}\n";
+                print TEMP "Chromosome $chromosome TOTAL $ins_count INSERTION with $depth{$genome_position} no insertions AT $temp_pos most abundant is $temp_most_ins with $temp_most insertions\t$temp_most_ins\t$genome_position\t$ins{$genome_position}\n";
+            }
+            if ($temp_most > (($depth{$genome_position})/10)) {
+                $indel_proportion = $temp_most/$depth{$genome_position};
+                #print OUTINDELS "Chromosome\tType\tLocation\tIndel_count\tIndel_size\tRead_depth\tIndel size\tIndel proportion\n";
+                print OUTINDELS "$chromosome\tInsertion\t$genome_position\t$ins_count\t$temp_most_ins\t$depth{$genome_position}\t$temp_most_ins\t$indel_proportion\n";
+            }
+        }
+
+        if (exists $del{$genome_position}){
+
+            if ($del{$genome_position} > $depth{$genome_position}) {
+                print OUT "Chromosome $chromosome POSSIBLE $del{$genome_position} deletion against $depth{$genome_position} no deletions found AT $temp_pos\n";
+                #print OUTINDELS "Chromosome $chromosome POSSIBLE $del{$genome_position} deletion against $depth{$genome_position} no deletions found AT $temp_pos\n";
+                print TEMP "Chromosome $chromosome POSSIBLE $del{$genome_position} deletion against $depth{$genome_position} no deletions found AT $temp_pos most frequent is $temp_most_dels deletion with $del{$genome_position}\t1\t$genome_position\t$del{$genome_position}\n";
+            }
+            #if ($del_count > 10) {print OUT "POSSIBLE $del_count deletion AT $genome_position\t $del{$genome_position}\n";}
+
+            if ($del{$genome_position} > (($depth{$genome_position})/10)) {
+                $indel_proportion = $del{$genome_position}/$depth{$genome_position};
+                #print OUTINDELS "Chromosome\tType\tLocation\tIndel_count\tIndel_size\tRead_depth\tIndel size\tIndel proportion\n";
+                print OUTINDELS "$chromosome\tDeletion\t$genome_position\t$del{$genome_position}\t$temp_most_dels\t$depth{$genome_position}\t$temp_most_dels\t$indel_proportion\n";
+            }
+        }
+
+        $g_flag = 0;
+        $a_flag = 0;
+        $t_flag = 0;
+        $c_flag = 0;
+        $amb = "N";
+        $top_nuc = 0;
+        $top_nucleotide = "";
+        $sec_nuc = 0;
+        $second_nucleotide = "";
+        $ref_nuc = 0;
+        if ($ref_nucleotide eq "G") {$ref_nuc = $g{$genome_position};}
+        if ($ref_nucleotide eq "A") {$ref_nuc = $a{$genome_position};}
+        if ($ref_nucleotide eq "T") {$ref_nuc = $t{$genome_position};}
+        if ($ref_nucleotide eq "C") {$ref_nuc = $c{$genome_position};}
+        if ($g{$genome_position} >= $top_nuc){
+            $sam_con = "G";
+            #$g_flag = 1;
+            $second_nucleotide = $top_nucleotide;
+            $sec_nuc = $top_nuc;
+            $top_nuc = $g{$genome_position};
+            $top_nucleotide = "G";
+            #if ($ref_nucleotide eq "G") {$amb ="G";}
+        }
+        if ($a{$genome_position} >= $top_nuc){
+            $sam_con = "A";
+            #$a_flag = 1;
+            $second_nucleotide = $top_nucleotide;
+            $sec_nuc = $top_nuc;
+            $top_nuc = $a{$genome_position};
+            $top_nucleotide = "A";
+            #if ($ref_nucleotide eq "A") {$amb ="A";}
+        }
+        if ($t{$genome_position} >= $top_nuc){
+            $sam_con = "T";
+            #$t_flag = 1;
+            $second_nucleotide = $top_nucleotide;
+            $sec_nuc = $top_nuc;
+            $top_nuc = $t{$genome_position};
+            $top_nucleotide = "T";
+            # if ($ref_nucleotide eq "T") {$amb ="T";}
+        }
+        if ($c{$genome_position} >= $top_nuc){
+            $sam_con = "C";
+            #$c_flag = 1;
+            $second_nucleotide = $top_nucleotide;
+            $sec_nuc = $top_nuc;
+            $top_nuc = $c{$genome_position};
+            $top_nucleotide = "C";
+            # if ($ref_nucleotide eq "C") {$amb ="C";}
+        }
+
+        #print "This is G's recoded at this location $g{$genome_position}\n";
+        if (($g{$genome_position} >= $a{$genome_position}) and ($g{$genome_position} >= $t{$genome_position}) and ($g{$genome_position} >= $c{$genome_position}) and ($g{$genome_position} >= 1)) {
+            $sam_con = "G"; $g_flag = 1;
+            if ($ref_nucleotide eq "G") {$amb ="G";}
+        }
+        if (($a{$genome_position} >= $g{$genome_position}) and ($a{$genome_position} >= $t{$genome_position}) and ($a{$genome_position} >= $c{$genome_position}) and ($a{$genome_position} >= 1)) {
+            $sam_con = "A"; $a_flag = 1;
+            if ($ref_nucleotide eq "A") {$amb ="A";}
+        }
+        if (($t{$genome_position} >= $g{$genome_position}) and ($t{$genome_position} >= $a{$genome_position}) and ($t{$genome_position} >= $c{$genome_position}) and ($t{$genome_position} >= 1)) {
+            $sam_con = "T"; $t_flag = 1;
+            if ($ref_nucleotide eq "T") {$amb ="T";}
+        }
+        if (($c{$genome_position} >= $g{$genome_position}) and ($c{$genome_position} >= $a{$genome_position}) and ($c{$genome_position} >= $t{$genome_position}) and ($c{$genome_position} >= 1)) {
+            $sam_con = "C"; $c_flag = 1;
+            if ($ref_nucleotide eq "C") {$amb ="C";}
+        }
+
+        if (($g_flag + $a_flag + $t_flag + $c_flag) > 1) {
+            print OUT "ambiguity chromosome $chromosome at $temp_pos $g_flag G $a_flag A $t_flag T $c_flag C counts are G $g{$genome_position} A $a{$genome_position} T $t{$genome_position} C $c{$genome_position} \n";
+            if ($amb ne "N") {$sam_con = $amb;}
+        }
+
+        if ($sam_con ne $ref_nucleotide) {
+            #print "change\n";
+            print OUT "$chromosome\t$temp_pos\t$ref_nucleotide\t$depth{$genome_position}\t$g{$genome_position}\t$a{$genome_position} \t$t{$genome_position} \t$c{$genome_position}\t$sam_con\t$del{$genome_position}\t$ins_count\t$temp_most_ins\t$temp_most\n";
+        } else {
+            if (($len_genome <100000) or ($temp_most > $depth{$genome_position}) or ($del{$genome_position} > $depth{$genome_position})){
+                print OUT "$chromosome\t$temp_pos\t$ref_nucleotide\t$depth{$genome_position}\t$g{$genome_position}\t$a{$genome_position} \t$t{$genome_position} \t$c{$genome_position}\t.\t$del{$genome_position}\t$ins_count\t$temp_most_ins\t$temp_most\n";
+            }
+        }
+
+        substr ($genome, $genome_position, 1, $sam_con);
+
+        $genome_position ++;
+        #print "at second $genome_position\n";
+    }
+
+    close TEMP;
+    $new_genome_w_indels = $genome;
+    open (TEMP, "temp.txt");
+    @indels = reverse <TEMP>;
+    foreach $line (@indels)
+    {
+        chomp $line;
+        @indels_cells = split(/\t/, $line);
+        $change = $indels_cells[1];
+        $type = $indels_cells[0];
+        $genome_position = $indels_cells[2];
+        if (index ($type, " INSERTION ") > 0) {
+            substr ($new_genome_w_indels, $genome_position, 0, $change);
+        }
+        if (index ($type, " deletion ") > 0) {
+            substr ($new_genome_w_indels, $genome_position, 1, "");
+        }
+    }
+
+    print OUTNEWGINDEL ">New_".$chromosome."_genome_with_indels_is\n$new_genome_w_indels\n";
+    print OUTNEWG ">New_".$chromosome."_genome_is\n$genome\n";
+    open (CTSO, ">$ARGV[0].CTSO.txt");
+    $warning = "";
+    #open (OUTINDELKEYSIG, ">$ARGV[0].Significant_key_indels.txt");
+    # open (OUTINDELKEY, ">$ARGV[0].key_indels.txt");
+    # $tmp_name = "$chromosome\tDeletion\t$genome_position\t$temp_cv\t$insertion"; temp_cv is the size of the indel
+    print OUTINDELKEY "Chromosome\tInsertion or deletion\tLocation\tIndel size\tInsertion seq\tObservation count\tDepth at this location\tProportion of observation vs depth\n";
+    print OUTINDELKEYSIG "Chromosome\tInsertion or deletion\tLocation\tIndel size\tInsertion seq\tObservation count\tDepth at this location\tProportion of observation vs depth\n";
+    foreach $key (keys %indel_tab){
+        $value = $indel_tab{$key};
+        @array = split(/\t/, $key);
+        $genome_position = $array[2]; $indel_len = $array[3];
+        if ($depth{$genome_position} > 0) {$a = ($value / $depth{$genome_position}); $tmp_proportion = sprintf ("%.2f",$a);} else {$tmp_proportion = "zero depth here";}
+
+        print OUTINDELKEY "$key\t$value\t$depth{$genome_position}\t$tmp_proportion\n";
+        if ($tmp_proportion > 0.1) {
+            if ($indel_len > 6) {
+                print "\n\nCtSo\nInsertion or deletion\t$array[1]\nLocation\t$genome_position\nIndel size\t$indel_len\nObservation count\t$value\nDepth at this location\t$depth{$genome_position}\nProportion of observation vs depth\t$tmp_proportion\n\n";
+                print CTSO "\n\nCtSo\nInsertion or deletion\t$array[1]\nLocation\t$genome_position\nIndel size\t$indel_len\nObservation count\t$value\nDepth at this location\t$depth{$genome_position}\nProportion of observation vs depth\t$tmp_proportion\n\n";
+                print OUTINDELKEYSIG "$key\t$value\t$depth{$genome_position}\t$tmp_proportion\n";
+                if (($value > 9) or ($depth{$genome_position} > 9)) {$warning = $warning."$key\t$value\t$depth{$genome_position}\t$tmp_proportion\n";}
+            }
+            #print OUTINDELKEYSIG "$key\t$value\t$depth{$genome_position}\t$tmp_proportion\n";
+        }
+    }
+    if ($warning ne "") {print OUTINDELKEYSIG "\n\nEspecially take a moment to look at these in the above list...CtSo!\n\n$warning\n";}
+}
+
diff -r 000000000000 -r 4f3585e2f14b env/bin/Activate.ps1
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/Activate.ps1	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,241 @@
+<# +.Synopsis +Activate a Python virtual environment for the current PowerShell session. + +.Description +Pushes the python executable for a virtual environment to the front of the +$Env:PATH environment variable and sets the prompt to signify that you are +in a Python virtual environment. Makes use of the command line switches as +well as the `pyvenv.cfg` file values present in the virtual environment. + +.Parameter VenvDir +Path to the directory that contains the virtual environment to activate. The +default value for this is the parent of the directory that the Activate.ps1 +script is located within. + +.Parameter Prompt +The prompt prefix to display when this virtual environment is activated. By +default, this prompt is the name of the virtual environment folder (VenvDir) +surrounded by parentheses and followed by a single space (ie. '(.venv) '). + +.Example +Activate.ps1 +Activates the Python virtual environment that contains the Activate.ps1 script. + +.Example +Activate.ps1 -Verbose +Activates the Python virtual environment that contains the Activate.ps1 script, +and shows extra information about the activation as it executes. + +.Example +Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv +Activates the Python virtual environment located in the specified location.
+ +.Example +Activate.ps1 -Prompt "MyPython" +Activates the Python virtual environment that contains the Activate.ps1 script, +and prefixes the current prompt with the specified string (surrounded in +parentheses) while the virtual environment is active. + +.Notes +On Windows, it may be required to enable this Activate.ps1 script by setting the +execution policy for the user. You can do this by issuing the following PowerShell +command: + +PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser + +For more information on Execution Policies: +https://go.microsoft.com/fwlink/?LinkID=135170 + +#> +Param( + [Parameter(Mandatory = $false)] + [String] + $VenvDir, + [Parameter(Mandatory = $false)] + [String] + $Prompt +) + +<# Function declarations --------------------------------------------------- #> + +<# +.Synopsis +Remove all shell session elements added by the Activate script, including the +addition of the virtual environment's Python executable from the beginning of +the PATH variable. + +.Parameter NonDestructive +If present, do not remove this function from the global namespace for the +session. + +#> +function global:deactivate ([switch]$NonDestructive) { + # Revert to original values + + # The prior prompt: + if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) { + Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt + Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT + } + + # The prior PYTHONHOME: + if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) { + Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME + Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME + } + + # The prior PATH: + if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) { + Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH + Remove-Item -Path Env:_OLD_VIRTUAL_PATH + } + + # Just remove the VIRTUAL_ENV altogether: + if (Test-Path -Path Env:VIRTUAL_ENV) { + Remove-Item -Path env:VIRTUAL_ENV + } + + # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether: + if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) { + Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force + } + + # Leave deactivate function in the global namespace if requested: + if (-not $NonDestructive) { + Remove-Item -Path function:deactivate + } +} + +<# +.Description +Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the +given folder, and returns them in a map. + +For each line in the pyvenv.cfg file, if that line can be parsed into exactly +two strings separated by `=` (with any amount of whitespace surrounding the =) +then it is considered a `key = value` line. The left hand string is the key, +the right hand is the value. + +If the value starts with a `'` or a `"` then the first and last character is +stripped from the value before being captured. + +.Parameter ConfigDir +Path to the directory that contains the `pyvenv.cfg` file. +#> +function Get-PyVenvConfig( + [String] + $ConfigDir +) { + Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg" + + # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue). + $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue + + # An empty map will be returned if no config file is found. 
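+    # For reference, pyvenv.cfg normally holds `key = value` lines such as the
+    # following (illustrative values, not taken from this repository):
+    #
+    #   home = /usr/local/bin
+    #   include-system-site-packages = false
+    #   version = 3.9.1
+    #   prompt = 'env'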
+ $pyvenvConfig = @{ } + + if ($pyvenvConfigPath) { + + Write-Verbose "File exists, parse `key = value` lines" + $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath + + $pyvenvConfigContent | ForEach-Object { + $keyval = $PSItem -split "\s*=\s*", 2 + if ($keyval[0] -and $keyval[1]) { + $val = $keyval[1] + + # Remove extraneous quotations around a string value. + if ("'""".Contains($val.Substring(0, 1))) { + $val = $val.Substring(1, $val.Length - 2) + } + + $pyvenvConfig[$keyval[0]] = $val + Write-Verbose "Adding Key: '$($keyval[0])'='$val'" + } + } + } + return $pyvenvConfig +} + + +<# Begin Activate script --------------------------------------------------- #> + +# Determine the containing directory of this script +$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition +$VenvExecDir = Get-Item -Path $VenvExecPath + +Write-Verbose "Activation script is located in path: '$VenvExecPath'" +Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)" +Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)" + +# Set values required in priority: CmdLine, ConfigFile, Default +# First, get the location of the virtual environment, it might not be +# VenvExecDir if specified on the command line. +if ($VenvDir) { + Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values" +} +else { + Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir." + $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/") + Write-Verbose "VenvDir=$VenvDir" +} + +# Next, read the `pyvenv.cfg` file to determine any required value such +# as `prompt`. +$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir + +# Next, set the prompt from the command line, or the config file, or +# just use the name of the virtual environment folder. +if ($Prompt) { + Write-Verbose "Prompt specified as argument, using '$Prompt'" +} +else { + Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value" + if ($pyvenvCfg -and $pyvenvCfg['prompt']) { + Write-Verbose " Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'" + $Prompt = $pyvenvCfg['prompt']; + } + else { + Write-Verbose " Setting prompt based on parent directory's name. (Is the directory name passed to venv module when creating the virtual environment)" + Write-Verbose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'" + $Prompt = Split-Path -Path $venvDir -Leaf + } +} + +Write-Verbose "Prompt = '$Prompt'" +Write-Verbose "VenvDir='$VenvDir'" + +# Deactivate any currently active virtual environment, but leave the +# deactivate function in place. +deactivate -nondestructive + +# Now set the environment variable VIRTUAL_ENV, used by many tools to determine +# that there is an activated venv.
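+# (Illustrative, not part of the upstream script: another script can detect an
+# active venv with a check such as
+#   if ($Env:VIRTUAL_ENV) { Write-Host "venv active at $Env:VIRTUAL_ENV" }
+# )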
+$env:VIRTUAL_ENV = $VenvDir + +if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) { + + Write-Verbose "Setting prompt to '$Prompt'" + + # Set the prompt to include the env name + # Make sure _OLD_VIRTUAL_PROMPT is global + function global:_OLD_VIRTUAL_PROMPT { "" } + Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT + New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt + + function global:prompt { + Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) " + _OLD_VIRTUAL_PROMPT + } +} + +# Clear PYTHONHOME +if (Test-Path -Path Env:PYTHONHOME) { + Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME + Remove-Item -Path Env:PYTHONHOME +} + +# Add the venv to the PATH +Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH +$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH" diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/bagit.cpython-39.pyc Binary file env/bin/__pycache__/bagit.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2html.cpython-39.pyc Binary file env/bin/__pycache__/rst2html.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2html4.cpython-39.pyc Binary file env/bin/__pycache__/rst2html4.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2html5.cpython-39.pyc Binary file env/bin/__pycache__/rst2html5.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2latex.cpython-39.pyc Binary file env/bin/__pycache__/rst2latex.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2man.cpython-39.pyc Binary file env/bin/__pycache__/rst2man.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2odt.cpython-39.pyc Binary file env/bin/__pycache__/rst2odt.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2odt_prepstyles.cpython-39.pyc Binary file env/bin/__pycache__/rst2odt_prepstyles.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2pseudoxml.cpython-39.pyc Binary file env/bin/__pycache__/rst2pseudoxml.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2s5.cpython-39.pyc Binary file env/bin/__pycache__/rst2s5.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2xetex.cpython-39.pyc Binary file env/bin/__pycache__/rst2xetex.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rst2xml.cpython-39.pyc Binary file env/bin/__pycache__/rst2xml.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/__pycache__/rstpep2html.cpython-39.pyc Binary file env/bin/__pycache__/rstpep2html.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/activate --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/activate Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,66 @@ +# This file must be used with "source bin/activate" *from bash* +# you cannot run it directly + +deactivate () { + # reset old environment variables + if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then + PATH="${_OLD_VIRTUAL_PATH:-}" + export PATH + unset _OLD_VIRTUAL_PATH + fi + if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then + PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}" + export PYTHONHOME + unset _OLD_VIRTUAL_PYTHONHOME + fi + + # This should 
detect bash and zsh, which have a hash command that must + # be called to get it to forget past commands. Without forgetting + # past commands the $PATH changes we made may not be respected + if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then + hash -r 2> /dev/null + fi + + if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then + PS1="${_OLD_VIRTUAL_PS1:-}" + export PS1 + unset _OLD_VIRTUAL_PS1 + fi + + unset VIRTUAL_ENV + if [ ! "${1:-}" = "nondestructive" ] ; then + # Self destruct! + unset -f deactivate + fi +} + +# unset irrelevant variables +deactivate nondestructive + +VIRTUAL_ENV="/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env" +export VIRTUAL_ENV + +_OLD_VIRTUAL_PATH="$PATH" +PATH="$VIRTUAL_ENV/bin:$PATH" +export PATH + +# unset PYTHONHOME if set +# this will fail if PYTHONHOME is set to the empty string (which is bad anyway) +# could use `if (set -u; : $PYTHONHOME) ;` in bash +if [ -n "${PYTHONHOME:-}" ] ; then + _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}" + unset PYTHONHOME +fi + +if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then + _OLD_VIRTUAL_PS1="${PS1:-}" + PS1="(env) ${PS1:-}" + export PS1 +fi + +# This should detect bash and zsh, which have a hash command that must +# be called to get it to forget past commands. Without forgetting +# past commands the $PATH changes we made may not be respected +if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then + hash -r 2> /dev/null +fi diff -r 000000000000 -r 4f3585e2f14b env/bin/activate-global-python-argcomplete --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/activate-global-python-argcomplete Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,76 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# PYTHON_ARGCOMPLETE_OK + +# Copyright 2012-2019, Andrey Kislyuk and argcomplete contributors. +# Licensed under the Apache License. See https://github.com/kislyuk/argcomplete for more info. + +''' +Activate the generic bash-completion script for the argcomplete module. 
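+
+Illustrative invocations (the flags are defined by the argument parser below;
+the system-wide default destination usually requires root):
+
+    activate-global-python-argcomplete            # install to /etc/bash_completion.d
+    activate-global-python-argcomplete --user     # install to ~/.bash_completion.d/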
+''' + +import os, sys, argparse, argcomplete, shutil, fileinput + +parser = argparse.ArgumentParser(description=__doc__, + formatter_class=argparse.RawDescriptionHelpFormatter) + +dest_opt = parser.add_argument("--dest", default="/etc/bash_completion.d", + help="Specify the bash completion modules directory to install into") +parser.add_argument("--user", help="Install into user directory (~/.bash_completion.d/)", action='store_true') +parser.add_argument("--no-defaults", dest="use_defaults", action="store_false", default=True, + help="When no matches are generated, do not fallback to readline\'s default completion") +parser.add_argument("--complete-arguments", nargs=argparse.REMAINDER, + help="arguments to call complete with; use of this option discards default options") +argcomplete.autocomplete(parser) +args = parser.parse_args() + +if args.user: + args.dest = os.path.expanduser("~/.bash_completion.d/") + if not os.path.exists(args.dest): + try: + os.mkdir(args.dest) + except Exception as e: + parser.error("Path {d} does not exist and could not be created: {e}".format(d=args.dest, e=e)) +elif not os.path.exists(args.dest) and args.dest != '-': + if sys.platform == 'darwin' and args.dest == dest_opt.default and os.path.exists("/usr/local" + dest_opt.default): + args.dest = "/usr/local" + dest_opt.default + else: + parser.error("Path {d} does not exist".format(d=args.dest)) + +activator = os.path.join(os.path.dirname(argcomplete.__file__), 'bash_completion.d', 'python-argcomplete') + +if args.complete_arguments is None: + complete_options = '-o default -o bashdefault' if args.use_defaults else '-o bashdefault' +else: + complete_options = " ".join(args.complete_arguments) +complete_call = "complete{} -D -F _python_argcomplete_global".format(" " + complete_options if complete_options else "") +def replaceCompleteCall(line): + if line.startswith("complete") and "_python_argcomplete_global" in line: + return complete_call + ('\n' if line.endswith('\n') else '') + else: + return line + +if args.dest == '-': + for l in open(activator): + sys.stdout.write(replaceCompleteCall(l)) +else: + dest = os.path.join(args.dest, "python-argcomplete") + + sys.stdout.write("Installing bash completion script " + dest) + if not args.use_defaults: + sys.stdout.write(" without -o default") + elif args.complete_arguments: + sys.stdout.write(" with options: " + complete_options) + sys.stdout.write("\n") + + try: + shutil.copy(activator, dest) + if args.complete_arguments or not args.use_defaults: + for l in fileinput.input(dest, inplace=True): + # fileinput with inplace=True redirects stdout to the edited file + sys.stdout.write(replaceCompleteCall(l)) + except Exception as e: + err = str(e) + if args.dest == dest_opt.default: + err += ("\nPlease try --user to install into a user directory, " + "or --dest to specify the bash completion modules directory") + parser.error(err) diff -r 000000000000 -r 4f3585e2f14b env/bin/activate.csh --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/activate.csh Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,25 @@ +# This file must be used with "source bin/activate.csh" *from csh*. +# You cannot run it directly. +# Created by Davide Di Blasi . 
+# Ported to Python 3.3 venv by Andrew Svetlov + +alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; test "\!:*" != "nondestructive" && unalias deactivate' + +# Unset irrelevant variables. +deactivate nondestructive + +setenv VIRTUAL_ENV "/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env" + +set _OLD_VIRTUAL_PATH="$PATH" +setenv PATH "$VIRTUAL_ENV/bin:$PATH" + + +set _OLD_VIRTUAL_PROMPT="$prompt" + +if (! "$?VIRTUAL_ENV_DISABLE_PROMPT") then + set prompt = "(env) $prompt" +endif + +alias pydoc python -m pydoc + +rehash diff -r 000000000000 -r 4f3585e2f14b env/bin/activate.fish --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/activate.fish Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,64 @@ +# This file must be used with "source /bin/activate.fish" *from fish* +# (https://fishshell.com/); you cannot run it directly. + +function deactivate -d "Exit virtual environment and return to normal shell environment" + # reset old environment variables + if test -n "$_OLD_VIRTUAL_PATH" + set -gx PATH $_OLD_VIRTUAL_PATH + set -e _OLD_VIRTUAL_PATH + end + if test -n "$_OLD_VIRTUAL_PYTHONHOME" + set -gx PYTHONHOME $_OLD_VIRTUAL_PYTHONHOME + set -e _OLD_VIRTUAL_PYTHONHOME + end + + if test -n "$_OLD_FISH_PROMPT_OVERRIDE" + functions -e fish_prompt + set -e _OLD_FISH_PROMPT_OVERRIDE + functions -c _old_fish_prompt fish_prompt + functions -e _old_fish_prompt + end + + set -e VIRTUAL_ENV + if test "$argv[1]" != "nondestructive" + # Self-destruct! + functions -e deactivate + end +end + +# Unset irrelevant variables. +deactivate nondestructive + +set -gx VIRTUAL_ENV "/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env" + +set -gx _OLD_VIRTUAL_PATH $PATH +set -gx PATH "$VIRTUAL_ENV/bin" $PATH + +# Unset PYTHONHOME if set. +if set -q PYTHONHOME + set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME + set -e PYTHONHOME +end + +if test -z "$VIRTUAL_ENV_DISABLE_PROMPT" + # fish uses a function instead of an env var to generate the prompt. + + # Save the current fish_prompt function as the function _old_fish_prompt. + functions -c fish_prompt _old_fish_prompt + + # With the original prompt function renamed, we can override with our own. + function fish_prompt + # Save the return status of the last command. + set -l old_status $status + + # Output the venv prompt; color taken from the blue of the Python logo. + printf "%s%s%s" (set_color 4B8BBE) "(env) " (set_color normal) + + # Restore the return status of the previous command. + echo "exit $old_status" | . + # Output the original/"old" prompt. 
+ _old_fish_prompt + end + + set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV" +end diff -r 000000000000 -r 4f3585e2f14b env/bin/asadmin --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/asadmin Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,290 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Copyright (c) 2011 Joel Barciauskas http://joel.barciausk.as/ +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, dis- +# tribute, sublicense, and/or sell copies of the Software, and to permit +# persons to whom the Software is furnished to do so, subject to the fol- +# lowing conditions: +# +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- +# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + +# +# Auto Scaling Groups Tool +# +VERSION="0.1" +usage = """%prog [options] [command] +Commands: + list|ls List all Auto Scaling Groups + list-lc|ls-lc List all Launch Configurations + delete Delete ASG + delete-lc Delete Launch Configuration + get Get details of ASG + create Create an ASG + create-lc Create a Launch Configuration + update Update a property of an ASG + update-image Update image ID for ASG by creating a new LC + migrate-instances Shut down current instances one by one and wait for ASG to start up a new instance with the current AMI (useful in conjunction with update-image) + +Examples: + + 1) Create launch configuration + bin/asadmin create-lc my-lc-1 -i ami-1234abcd -t c1.xlarge -k my-key -s web-group -m + + 2) Create auto scaling group in us-east-1a and us-east-1c with a load balancer and min size of 2 and max size of 6 + bin/asadmin create my-asg -z us-east-1a -z us-east-1c -l my-lc-1 -b my-lb -H ELB -p 180 -x 2 -X 6 +""" + +def get_group(autoscale, name): + g = autoscale.get_all_groups(names=[name]) + if len(g) < 1: + print "No auto scaling groups by the name of %s found" % name + return sys.exit(1) + return g[0] + +def get_lc(autoscale, name): + l = autoscale.get_all_launch_configurations(names=[name]) + if len(l) < 1: + print "No launch configurations by the name of %s found" % name + sys.exit(1) + return l[0] + +def list(autoscale): + """List all ASGs""" + print "%-20s %s" % ("Name", "LC Name") + print "-"*80 + groups = autoscale.get_all_groups() + for g in groups: + print "%-20s %s" % (g.name, g.launch_config_name) + +def list_lc(autoscale): + """List all LCs""" + print "%-30s %-20s %s" % ("Name", "Image ID", "Instance Type") + print "-"*80 + for l in autoscale.get_all_launch_configurations(): + print "%-30s %-20s %s" % (l.name, l.image_id, l.instance_type) + +def get(autoscale, name): + """Get details about ASG """ + g = get_group(autoscale, name) + print "="*80 + print "%-30s %s" % ('Name:', g.name) + print "%-30s %s" % ('Launch configuration:', g.launch_config_name) + print "%-30s %s" % ('Minimum size:', g.min_size) + print "%-30s %s" % 
('Maximum size:', g.max_size) + print "%-30s %s" % ('Desired capacity:', g.desired_capacity) + print "%-30s %s" % ('Load balancers:', ','.join(g.load_balancers)) + + print + + print "Instances" + print "---------" + print "%-20s %-20s %-20s %s" % ("ID", "Status", "Health", "AZ") + for i in g.instances: + print "%-20s %-20s %-20s %s" % \ + (i.instance_id, i.lifecycle_state, i.health_status, i.availability_zone) + + print + +def create(autoscale, name, zones, lc_name, load_balancers, hc_type, hc_period, + min_size, max_size, cooldown, capacity): + """Create an ASG named """ + g = AutoScalingGroup(name=name, launch_config=lc_name, + availability_zones=zones, load_balancers=load_balancers, + default_cooldown=cooldown, health_check_type=hc_type, + health_check_period=hc_period, desired_capacity=capacity, + min_size=min_size, max_size=max_size) + g = autoscale.create_auto_scaling_group(g) + return list(autoscale) + +def create_lc(autoscale, name, image_id, instance_type, key_name, + security_groups, instance_monitoring): + l = LaunchConfiguration(name=name, image_id=image_id, + instance_type=instance_type,key_name=key_name, + security_groups=security_groups, + instance_monitoring=instance_monitoring) + l = autoscale.create_launch_configuration(l) + return list_lc(autoscale) + +def update(autoscale, name, prop, value): + g = get_group(autoscale, name) + setattr(g, prop, value) + g.update() + return get(autoscale, name) + +def delete(autoscale, name, force_delete=False): + """Delete this ASG""" + g = get_group(autoscale, name) + autoscale.delete_auto_scaling_group(g.name, force_delete) + print "Auto scaling group %s deleted" % name + return list(autoscale) + +def delete_lc(autoscale, name): + """Delete this LC""" + l = get_lc(autoscale, name) + autoscale.delete_launch_configuration(name) + print "Launch configuration %s deleted" % name + return list_lc(autoscale) + +def update_image(autoscale, name, lc_name, image_id, is_migrate_instances=False): + """ Get the current launch config, + Update its name and image id + Re-create it as a new launch config + Update the ASG with the new LC + Delete the old LC """ + + g = get_group(autoscale, name) + l = get_lc(autoscale, g.launch_config_name) + + old_lc_name = l.name + l.name = lc_name + l.image_id = image_id + autoscale.create_launch_configuration(l) + g.launch_config_name = l.name + g.update() + + if(is_migrate_instances): + migrate_instances(autoscale, name) + else: + return get(autoscale, name) + +def migrate_instances(autoscale, name): + """ Shut down instances of the old image type one by one + and let the ASG start up instances with the new image """ + g = get_group(autoscale, name) + + old_instances = g.instances + ec2 = boto.connect_ec2() + for old_instance in old_instances: + print "Terminating instance " + old_instance.instance_id + ec2.terminate_instances([old_instance.instance_id]) + while True: + g = get_group(autoscale, name) + new_instances = g.instances + for new_instance in new_instances: + hasOldInstance = False + instancesReady = True + if(old_instance.instance_id == new_instance.instance_id): + hasOldInstance = True + print "Waiting for old instance to shut down..." + break + elif(new_instance.lifecycle_state != 'InService'): + instancesReady = False + print "Waiting for instances to be ready...." 
+ break + if(not hasOldInstance and instancesReady): + break + else: + time.sleep(20) + return get(autoscale, name) + +if __name__ == "__main__": + try: + import readline + except ImportError: + pass + import boto + import sys + import time + from optparse import OptionParser + from boto.mashups.iobject import IObject + from boto.ec2.autoscale import AutoScalingGroup + from boto.ec2.autoscale import LaunchConfiguration + parser = OptionParser(version=VERSION, usage=usage) + """ Create launch config options """ + parser.add_option("-i", "--image-id", + help="Image (AMI) ID", action="store", + type="string", default=None, dest="image_id") + parser.add_option("-t", "--instance-type", + help="EC2 Instance Type (e.g., m1.large, c1.xlarge), default is m1.large", + action="store", type="string", default="m1.large", dest="instance_type") + parser.add_option("-k", "--key-name", + help="EC2 Key Name", + action="store", type="string", dest="key_name") + parser.add_option("-s", "--security-group", + help="EC2 Security Group", + action="append", default=[], dest="security_groups") + parser.add_option("-m", "--monitoring", + help="Enable instance monitoring", + action="store_true", default=False, dest="instance_monitoring") + + """ Create auto scaling group options """ + parser.add_option("-z", "--zone", help="Add availability zone", action="append", default=[], dest="zones") + parser.add_option("-l", "--lc-name", + help="Launch configuration name", + action="store", default=None, type="string", dest="lc_name") + parser.add_option("-b", "--load-balancer", + help="Load balancer name", + action="append", default=[], dest="load_balancers") + parser.add_option("-H", "--health-check-type", + help="Health check type (EC2 or ELB)", + action="store", default="EC2", type="string", dest="hc_type") + parser.add_option("-p", "--health-check-period", + help="Health check period in seconds (default 300s)", + action="store", default=300, type="int", dest="hc_period") + parser.add_option("-X", "--max-size", + help="Max size of ASG (default 10)", + action="store", default=10, type="int", dest="max_size") + parser.add_option("-x", "--min-size", + help="Min size of ASG (default 2)", + action="store", default=2, type="int", dest="min_size") + parser.add_option("-c", "--cooldown", + help="Cooldown time after a scaling activity in seconds (default 300s)", + action="store", default=300, type="int", dest="cooldown") + parser.add_option("-C", "--desired-capacity", + help="Desired capacity of the ASG", + action="store", default=None, type="int", dest="capacity") + parser.add_option("-f", "--force", + help="Force delete ASG", + action="store_true", default=False, dest="force") + parser.add_option("-y", "--migrate-instances", + help="Automatically migrate instances to new image when running update-image", + action="store_true", default=False, dest="migrate_instances") + + (options, args) = parser.parse_args() + + if len(args) < 1: + parser.print_help() + sys.exit(1) + + autoscale = boto.connect_autoscale() + + print "%s" % (autoscale.region.endpoint) + + command = args[0].lower() + if command in ("ls", "list"): + list(autoscale) + elif command in ("ls-lc", "list-lc"): + list_lc(autoscale) + elif command == "get": + get(autoscale, args[1]) + elif command == "create": + create(autoscale, args[1], options.zones, options.lc_name, + options.load_balancers, options.hc_type, + options.hc_period, options.min_size, options.max_size, + options.cooldown, options.capacity) + elif command == "create-lc": + create_lc(autoscale, args[1], 
options.image_id, options.instance_type, + options.key_name, options.security_groups, + options.instance_monitoring) + elif command == "update": + update(autoscale, args[1], args[2], args[3]) + elif command == "delete": + delete(autoscale, args[1], options.force) + elif command == "delete-lc": + delete_lc(autoscale, args[1]) + elif command == "update-image": + update_image(autoscale, args[1], args[2], + options.image_id, options.migrate_instances) + elif command == "migrate-instances": + migrate_instances(autoscale, args[1]) diff -r 000000000000 -r 4f3585e2f14b env/bin/bagit.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/bagit.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1617 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- + +from __future__ import absolute_import, division, print_function, unicode_literals + +import argparse +import codecs +import gettext +import hashlib +import logging +import multiprocessing +import os +import re +import signal +import sys +import tempfile +import unicodedata +import warnings +from collections import defaultdict +from datetime import date +from functools import partial +from os.path import abspath, isdir, isfile, join + +from pkg_resources import DistributionNotFound, get_distribution + +try: + from urllib.parse import urlparse +except ImportError: + from urlparse import urlparse + + +def find_locale_dir(): + for prefix in (os.path.dirname(__file__), sys.prefix): + locale_dir = os.path.join(prefix, "locale") + if os.path.isdir(locale_dir): + return locale_dir + + +TRANSLATION_CATALOG = gettext.translation( + "bagit-python", localedir=find_locale_dir(), fallback=True +) +if sys.version_info < (3,): + _ = TRANSLATION_CATALOG.ugettext +else: + _ = TRANSLATION_CATALOG.gettext + +MODULE_NAME = "bagit" if __name__ == "__main__" else __name__ + +LOGGER = logging.getLogger(MODULE_NAME) + +try: + VERSION = get_distribution(MODULE_NAME).version +except DistributionNotFound: + VERSION = "0.0.dev0" + +PROJECT_URL = "https://github.com/LibraryOfCongress/bagit-python" + +__doc__ = ( + _( + """ +BagIt is a directory, filename convention for bundling an arbitrary set of +files with a manifest, checksums, and additional metadata. More about BagIt +can be found at: + + http://purl.org/net/bagit + +bagit.py is a pure python drop in library and command line tool for creating, +and working with BagIt directories. + + +Command-Line Usage: + +Basic usage is to give bagit.py a directory to bag up: + + $ bagit.py my_directory + +This does a bag-in-place operation where the current contents will be moved +into the appropriate BagIt structure and the metadata files will be created. 
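+
+The resulting bag has the standard BagIt layout (illustrative sketch; the
+manifest and tagmanifest files depend on the checksum algorithms chosen,
+sha256 and sha512 by default):
+
+    my_directory/
+        bagit.txt
+        bag-info.txt
+        manifest-sha256.txt
+        manifest-sha512.txt
+        tagmanifest-sha256.txt
+        tagmanifest-sha512.txt
+        data/
+            ...the original files...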
+ +You can bag multiple directories if you wish: + + $ bagit.py directory1 directory2 + +Optionally you can provide metadata which will be stored in bag-info.txt: + + $ bagit.py --source-organization "Library of Congress" directory + +You can also select which manifest algorithms will be used: + + $ bagit.py --sha1 --md5 --sha256 --sha512 directory + + +Using BagIt from your Python code: + + import bagit + bag = bagit.make_bag('example-directory', {'Contact-Name': 'Ed Summers'}) + print(bag.entries) + +For more information or to contribute to bagit-python's development, please +visit %(PROJECT_URL)s +""" + ) + % globals() +) + +# standard bag-info.txt metadata +STANDARD_BAG_INFO_HEADERS = [ + "Source-Organization", + "Organization-Address", + "Contact-Name", + "Contact-Phone", + "Contact-Email", + "External-Description", + "External-Identifier", + "Bag-Size", + "Bag-Group-Identifier", + "Bag-Count", + "Internal-Sender-Identifier", + "Internal-Sender-Description", + "BagIt-Profile-Identifier", + # Bagging-Date is autogenerated + # Payload-Oxum is autogenerated +] + +try: + CHECKSUM_ALGOS = hashlib.algorithms_guaranteed +except AttributeError: + # FIXME: remove when we drop Python 2 (https://github.com/LibraryOfCongress/bagit-python/issues/102) + # Python 2.7.0-2.7.8 + CHECKSUM_ALGOS = set(hashlib.algorithms) +DEFAULT_CHECKSUMS = ["sha256", "sha512"] + +#: Block size used when reading files for hashing: +HASH_BLOCK_SIZE = 512 * 1024 + +#: Convenience function used everywhere we want to open a file to read text +#: rather than undecoded bytes: +open_text_file = partial(codecs.open, encoding="utf-8", errors="strict") + +# This is the same as decoding the byte values in codecs.BOM: +UNICODE_BYTE_ORDER_MARK = "\uFEFF" + + +def make_bag( + bag_dir, bag_info=None, processes=1, checksums=None, checksum=None, encoding="utf-8" +): + """ + Convert a given directory into a bag. You can pass in arbitrary + key/value pairs to put into the bag-info.txt metadata file as + the bag_info dictionary. + """ + + if checksum is not None: + warnings.warn( + _( + "The `checksum` argument for `make_bag` should be replaced with `checksums`" + ), + DeprecationWarning, + ) + checksums = checksum + + if checksums is None: + checksums = DEFAULT_CHECKSUMS + + bag_dir = os.path.abspath(bag_dir) + cwd = os.path.abspath(os.path.curdir) + + if cwd.startswith(bag_dir) and cwd != bag_dir: + raise RuntimeError( + _("Bagging a parent of the current directory is not supported") + ) + + LOGGER.info(_("Creating bag for directory %s"), bag_dir) + + if not os.path.isdir(bag_dir): + LOGGER.error(_("Bag directory %s does not exist"), bag_dir) + raise RuntimeError(_("Bag directory %s does not exist") % bag_dir) + + # FIXME: we should do the permissions checks before changing directories + old_dir = os.path.abspath(os.path.curdir) + + try: + # TODO: These two checks are currently redundant since an unreadable directory will also + # often be unwritable, and this code will require review when we add the option to + # bag to a destination other than the source. 
It would be nice if we could avoid + walking the directory tree more than once even if most filesystems will cache it + + unbaggable = _can_bag(bag_dir) + + if unbaggable: + LOGGER.error( + _("Unable to write to the following directories and files:\n%s"), + unbaggable, + ) + raise BagError(_("Missing permissions to move all files and directories")) + + unreadable_dirs, unreadable_files = _can_read(bag_dir) + + if unreadable_dirs or unreadable_files: + if unreadable_dirs: + LOGGER.error( + _("The following directories do not have read permissions:\n%s"), + unreadable_dirs, + ) + if unreadable_files: + LOGGER.error( + _("The following files do not have read permissions:\n%s"), + unreadable_files, + ) + raise BagError( + _("Read permissions are required to calculate file fixities") + ) + else: + LOGGER.info(_("Creating data directory")) + + # FIXME: if we calculate full paths we won't need to deal with changing directories + os.chdir(bag_dir) + cwd = os.getcwd() + temp_data = tempfile.mkdtemp(dir=cwd) + + for f in os.listdir("."): + if os.path.abspath(f) == temp_data: + continue + new_f = os.path.join(temp_data, f) + LOGGER.info( + _("Moving %(source)s to %(destination)s"), + {"source": f, "destination": new_f}, + ) + os.rename(f, new_f) + + LOGGER.info( + _("Moving %(source)s to %(destination)s"), + {"source": temp_data, "destination": "data"}, + ) + os.rename(temp_data, "data") + + # permissions for the payload directory should match those of the + # original directory + os.chmod("data", os.stat(cwd).st_mode) + + total_bytes, total_files = make_manifests( + "data", processes, algorithms=checksums, encoding=encoding + ) + + LOGGER.info(_("Creating bagit.txt")) + txt = """BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n""" + with open_text_file("bagit.txt", "w") as bagit_file: + bagit_file.write(txt) + + LOGGER.info(_("Creating bag-info.txt")) + if bag_info is None: + bag_info = {} + + # allow 'Bagging-Date' and 'Bag-Software-Agent' to be overridden + if "Bagging-Date" not in bag_info: + bag_info["Bagging-Date"] = date.strftime(date.today(), "%Y-%m-%d") + if "Bag-Software-Agent" not in bag_info: + bag_info["Bag-Software-Agent"] = "bagit.py v%s <%s>" % ( + VERSION, + PROJECT_URL, + ) + + bag_info["Payload-Oxum"] = "%s.%s" % (total_bytes, total_files) + _make_tag_file("bag-info.txt", bag_info) + + for c in checksums: + _make_tagmanifest_file(c, bag_dir, encoding="utf-8") + except Exception: + LOGGER.exception(_("An error occurred creating a bag in %s"), bag_dir) + raise + finally: + os.chdir(old_dir) + + return Bag(bag_dir) + + +class Bag(object): + """A representation of a bag.""" + + valid_files = ["bagit.txt", "fetch.txt"] + valid_directories = ["data"] + + def __init__(self, path=None): + super(Bag, self).__init__() + self.tags = {} + self.info = {} + #: Dictionary of manifest entries and the checksum values for each + #: algorithm: + self.entries = {} + + # To reliably handle Unicode normalization differences, we maintain + # lookup dictionaries in both directions for the filenames read from + # the filesystem and the manifests so we can handle cases where the + # normalization form changed between the bag being created and read. + # See https://github.com/LibraryOfCongress/bagit-python/issues/51.
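        # An illustrative aside (hypothetical values, not used by the class):
        # the same visible filename can arrive in two different byte forms.
        #
        #   import unicodedata
        #   composed = "caf\u00e9"            # NFC: precomposed 'é'
        #   decomposed = "cafe\u0301"         # NFD: 'e' + combining acute
        #   composed == decomposed                                # False
        #   unicodedata.normalize("NFC", decomposed) == composed  # True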
+ + #: maps Unicode-normalized values to the raw value from the filesystem + self.normalized_filesystem_names = {} + + #: maps Unicode-normalized values to the raw value in the manifest + self.normalized_manifest_names = {} + + self.algorithms = [] + self.tag_file_name = None + self.path = abspath(path) + if path: + # if path ends in a path separator, strip it off + if path[-1] == os.sep: + self.path = path[:-1] + self._open() + + def __str__(self): + # FIXME: develop a more informative string representation for a Bag + return self.path + + @property + def algs(self): + warnings.warn(_("Use Bag.algorithms instead of Bag.algs"), DeprecationWarning) + return self.algorithms + + @property + def version(self): + warnings.warn( + _("Use the Bag.version_info tuple instead of Bag.version"), + DeprecationWarning, + ) + return self._version + + def _open(self): + # Open the bagit.txt file, and load any tags from it, including + # the required version and encoding. + bagit_file_path = os.path.join(self.path, "bagit.txt") + + if not isfile(bagit_file_path): + raise BagError(_("Expected bagit.txt does not exist: %s") % bagit_file_path) + + self.tags = tags = _load_tag_file(bagit_file_path) + + required_tags = ("BagIt-Version", "Tag-File-Character-Encoding") + missing_tags = [i for i in required_tags if i not in tags] + if missing_tags: + raise BagError( + _("Missing required tag in bagit.txt: %s") % ", ".join(missing_tags) + ) + + # To avoid breaking existing code we'll leave self.version as the string + # and parse it into a numeric version_info tuple. In version 2.0 we can + # break that. + + self._version = tags["BagIt-Version"] + + try: + self.version_info = tuple(int(i) for i in self._version.split(".", 1)) + except ValueError: + raise BagError( + _("Bag version numbers must be MAJOR.MINOR numbers, not %s") + % self._version + ) + + if (0, 93) <= self.version_info <= (0, 95): + self.tag_file_name = "package-info.txt" + elif (0, 96) <= self.version_info < (2,): + self.tag_file_name = "bag-info.txt" + else: + raise BagError(_("Unsupported bag version: %s") % self._version) + + self.encoding = tags["Tag-File-Character-Encoding"] + + try: + codecs.lookup(self.encoding) + except LookupError: + raise BagValidationError(_("Unsupported encoding: %s") % self.encoding) + + info_file_path = os.path.join(self.path, self.tag_file_name) + if os.path.exists(info_file_path): + self.info = _load_tag_file(info_file_path, encoding=self.encoding) + + self._load_manifests() + + def manifest_files(self): + for filename in ["manifest-%s.txt" % a for a in CHECKSUM_ALGOS]: + f = os.path.join(self.path, filename) + if isfile(f): + yield f + + def tagmanifest_files(self): + for filename in ["tagmanifest-%s.txt" % a for a in CHECKSUM_ALGOS]: + f = os.path.join(self.path, filename) + if isfile(f): + yield f + + def compare_manifests_with_fs(self): + """ + Compare the filenames in the manifests to the filenames present on the + local filesystem and returns two lists of the files which are only + present in the manifests and the files which are only present on the + local filesystem, respectively. 
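        For example (hypothetical paths), if a manifest lists data/a.txt but
        only data/b.txt exists on disk, the return value would be
        (['data/a.txt'], ['data/b.txt']).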
+ """ + + # We compare the filenames after Unicode normalization so we can + # reliably detect normalization changes after bag creation: + files_on_fs = set(normalize_unicode(i) for i in self.payload_files()) + files_in_manifest = set( + normalize_unicode(i) for i in self.payload_entries().keys() + ) + + if self.version_info >= (0, 97): + files_in_manifest.update(self.missing_optional_tagfiles()) + + only_on_fs = list() + only_in_manifest = list() + + for i in files_on_fs.difference(files_in_manifest): + only_on_fs.append(self.normalized_filesystem_names[i]) + + for i in files_in_manifest.difference(files_on_fs): + only_in_manifest.append(self.normalized_manifest_names[i]) + + return only_in_manifest, only_on_fs + + def compare_fetch_with_fs(self): + """Compares the fetch entries with the files actually + in the payload, and returns a list of all the files + that still need to be fetched. + """ + + files_on_fs = set(self.payload_files()) + files_in_fetch = set(self.files_to_be_fetched()) + + return list(files_in_fetch - files_on_fs) + + def payload_files(self): + """Returns a list of filenames which are present on the local filesystem""" + payload_dir = os.path.join(self.path, "data") + + for dirpath, _, filenames in os.walk(payload_dir): + for f in filenames: + # Jump through some hoops here to make the payload files are + # returned with the directory structure relative to the base + # directory rather than the + normalized_f = os.path.normpath(f) + rel_path = os.path.relpath( + os.path.join(dirpath, normalized_f), start=self.path + ) + + self.normalized_filesystem_names[normalize_unicode(rel_path)] = rel_path + yield rel_path + + def payload_entries(self): + """Return a dictionary of items """ + # Don't use dict comprehension (compatibility with Python < 2.7) + return dict( + (key, value) + for (key, value) in self.entries.items() + if key.startswith("data" + os.sep) + ) + + def save(self, processes=1, manifests=False): + """ + save will persist any changes that have been made to the bag + metadata (self.info). + + If you have modified the payload of the bag (added, modified, + removed files in the data directory) and want to regenerate manifests + set the manifests parameter to True. The default is False since you + wouldn't want a save to accidentally create a new manifest for + a corrupted bag. + + If you want to control the number of processes that are used when + recalculating checksums use the processes parameter. 
+ """ + # Error checking + if not self.path: + raise BagError(_("Bag.save() called before setting the path!")) + + if not os.access(self.path, os.R_OK | os.W_OK | os.X_OK): + raise BagError( + _("Cannot save bag to non-existent or inaccessible directory %s") + % self.path + ) + + unbaggable = _can_bag(self.path) + if unbaggable: + LOGGER.error( + _( + "Missing write permissions for the following directories and files:\n%s" + ), + unbaggable, + ) + raise BagError(_("Missing permissions to move all files and directories")) + + unreadable_dirs, unreadable_files = _can_read(self.path) + if unreadable_dirs or unreadable_files: + if unreadable_dirs: + LOGGER.error( + _("The following directories do not have read permissions:\n%s"), + unreadable_dirs, + ) + if unreadable_files: + LOGGER.error( + _("The following files do not have read permissions:\n%s"), + unreadable_files, + ) + raise BagError( + _("Read permissions are required to calculate file fixities") + ) + + # Change working directory to bag directory so helper functions work + old_dir = os.path.abspath(os.path.curdir) + os.chdir(self.path) + + # Generate new manifest files + if manifests: + total_bytes, total_files = make_manifests( + "data", processes, algorithms=self.algorithms, encoding=self.encoding + ) + + # Update Payload-Oxum + LOGGER.info(_("Updating Payload-Oxum in %s"), self.tag_file_name) + self.info["Payload-Oxum"] = "%s.%s" % (total_bytes, total_files) + + _make_tag_file(self.tag_file_name, self.info) + + # Update tag-manifest for changes to manifest & bag-info files + for alg in self.algorithms: + _make_tagmanifest_file(alg, self.path, encoding=self.encoding) + + # Reload the manifests + self._load_manifests() + + os.chdir(old_dir) + + def tagfile_entries(self): + return dict( + (key, value) + for (key, value) in self.entries.items() + if not key.startswith("data" + os.sep) + ) + + def missing_optional_tagfiles(self): + """ + From v0.97 we need to validate any tagfiles listed + in the optional tagmanifest(s). As there is no mandatory + directory structure for additional tagfiles we can + only check for entries with missing files (not missing + entries for existing files). + """ + for tagfilepath in self.tagfile_entries().keys(): + if not os.path.isfile(os.path.join(self.path, tagfilepath)): + yield tagfilepath + + def fetch_entries(self): + """Load fetch.txt if present and iterate over its contents + + yields (url, size, filename) tuples + + raises BagError for errors such as an unsafe filename referencing + data outside of the bag directory + """ + + fetch_file_path = os.path.join(self.path, "fetch.txt") + + if isfile(fetch_file_path): + with open_text_file( + fetch_file_path, "r", encoding=self.encoding + ) as fetch_file: + for line in fetch_file: + url, file_size, filename = line.strip().split(None, 2) + + if self._path_is_dangerous(filename): + raise BagError( + _('Path "%(payload_file)s" in "%(source_file)s" is unsafe') + % { + "payload_file": filename, + "source_file": os.path.join(self.path, "fetch.txt"), + } + ) + + yield url, file_size, filename + + def files_to_be_fetched(self): + """ + Convenience wrapper for fetch_entries which returns only the + local filename + """ + + for url, file_size, filename in self.fetch_entries(): + yield filename + + def has_oxum(self): + return "Payload-Oxum" in self.info + + def validate(self, processes=1, fast=False, completeness_only=False): + """Checks the structure and contents are valid. 
+ + If you supply the parameter fast=True the Payload-Oxum (if present) will + be used to check that the payload files are present and accounted for, + instead of re-calculating fixities and comparing them against the + manifest. By default validate() will re-calculate fixities (fast=False). + """ + + self._validate_structure() + self._validate_bagittxt() + + self.validate_fetch() + + self._validate_contents( + processes=processes, fast=fast, completeness_only=completeness_only + ) + + return True + + def is_valid(self, fast=False, completeness_only=False): + """Returns validation success or failure as boolean. + Optional fast parameter passed directly to validate(). + """ + + try: + self.validate(fast=fast, completeness_only=completeness_only) + except BagError: + return False + + return True + + def _load_manifests(self): + self.entries = {} + manifests = list(self.manifest_files()) + + if self.version_info >= (0, 97): + # v0.97+ requires that optional tagfiles are verified. + manifests += list(self.tagmanifest_files()) + + for manifest_filename in manifests: + if manifest_filename.find("tagmanifest-") != -1: + search = "tagmanifest-" + else: + search = "manifest-" + alg = ( + os.path.basename(manifest_filename) + .replace(search, "") + .replace(".txt", "") + ) + if alg not in self.algorithms: + self.algorithms.append(alg) + + with open_text_file( + manifest_filename, "r", encoding=self.encoding + ) as manifest_file: + if manifest_file.encoding.startswith("UTF"): + # We'll check the first character to see if it's a BOM: + if manifest_file.read(1) == UNICODE_BYTE_ORDER_MARK: + # We'll skip it either way by letting line decoding + # happen at the new offset but we will issue a warning + # for UTF-8 since the presence of a BOM is contrary to + # the BagIt specification: + if manifest_file.encoding == "UTF-8": + LOGGER.warning( + _( + "%s is encoded using UTF-8 but contains an unnecessary" + " byte-order mark, which is not in compliance with the" + " BagIt RFC" + ), + manifest_file.name, + ) + else: + manifest_file.seek(0) # Pretend the first read never happened + + for line in manifest_file: + line = line.strip() + + # Ignore blank lines and comments. 
+ if line == "" or line.startswith("#"): + continue + + entry = line.split(None, 1) + + # Format is FILENAME *CHECKSUM + if len(entry) != 2: + LOGGER.error( + _( + "%(bag)s: Invalid %(algorithm)s manifest entry: %(line)s" + ), + {"bag": self, "algorithm": alg, "line": line}, + ) + continue + + entry_hash = entry[0] + entry_path = os.path.normpath(entry[1].lstrip("*")) + entry_path = _decode_filename(entry_path) + + if self._path_is_dangerous(entry_path): + raise BagError( + _( + 'Path "%(payload_file)s" in manifest "%(manifest_file)s" is unsafe' + ) + % { + "payload_file": entry_path, + "manifest_file": manifest_file.name, + } + ) + + entry_hashes = self.entries.setdefault(entry_path, {}) + + if alg in entry_hashes: + warning_ctx = { + "bag": self, + "algorithm": alg, + "filename": entry_path, + } + if entry_hashes[alg] == entry_hash: + msg = _( + "%(bag)s: %(algorithm)s manifest lists %(filename)s" + " multiple times with the same value" + ) + if self.version_info >= (1,): + raise BagError(msg % warning_ctx) + else: + LOGGER.warning(msg, warning_ctx) + else: + raise BagError( + _( + "%(bag)s: %(algorithm)s manifest lists %(filename)s" + " multiple times with conflicting values" + ) + % warning_ctx + ) + + entry_hashes[alg] = entry_hash + + self.normalized_manifest_names.update( + (normalize_unicode(i), i) for i in self.entries.keys() + ) + + def _validate_structure(self): + """ + Checks the structure of the bag to determine whether it conforms to the + BagIt spec. Returns true on success, otherwise it will raise a + BagValidationError exception. + """ + + self._validate_structure_payload_directory() + self._validate_structure_tag_files() + + def _validate_structure_payload_directory(self): + data_dir_path = os.path.join(self.path, "data") + + if not isdir(data_dir_path): + raise BagValidationError( + _("Expected data directory %s does not exist") % data_dir_path + ) + + def _validate_structure_tag_files(self): + # Note: we deviate somewhat from v0.96 of the spec in that it allows + # other files and directories to be present in the base directory + + if not list(self.manifest_files()): + raise BagValidationError(_("No manifest files found")) + if "bagit.txt" not in os.listdir(self.path): + raise BagValidationError( + _('Expected %s to contain "bagit.txt"') % self.path + ) + + def validate_fetch(self): + """Validate the fetch.txt file + + Raises `BagError` for errors and otherwise returns no value + """ + + for url, file_size, filename in self.fetch_entries(): + # fetch_entries will raise a BagError for unsafe filenames + # so at this point we will check only that the URL is minimally + # well formed: + parsed_url = urlparse(url) + + if not all((parsed_url.scheme, parsed_url.netloc)): + raise BagError(_("Malformed URL in fetch.txt: %s") % url) + + def _validate_contents(self, processes=1, fast=False, completeness_only=False): + if fast and not self.has_oxum(): + raise BagValidationError( + _("Fast validation requires bag-info.txt to include Payload-Oxum") + ) + + # Perform the fast file count + size check so we can fail early: + self._validate_oxum() + + if fast: + return + + self._validate_completeness() + + if completeness_only: + return + + self._validate_entries(processes) + + def _validate_oxum(self): + oxum = self.info.get("Payload-Oxum") + + if oxum is None: + return + + # If multiple Payload-Oxum tags (bad idea) + # use the first listed in bag-info.txt + if isinstance(oxum, list): + LOGGER.warning(_("bag-info.txt defines multiple Payload-Oxum values!")) + oxum = oxum[0] + + 
oxum_byte_count, oxum_file_count = oxum.split(".", 1) + + if not oxum_byte_count.isdigit() or not oxum_file_count.isdigit(): + raise BagError(_("Malformed Payload-Oxum value: %s") % oxum) + + oxum_byte_count = int(oxum_byte_count) + oxum_file_count = int(oxum_file_count) + total_bytes = 0 + total_files = 0 + + for payload_file in self.payload_files(): + payload_file = os.path.join(self.path, payload_file) + total_bytes += os.stat(payload_file).st_size + total_files += 1 + + if oxum_file_count != total_files or oxum_byte_count != total_bytes: + raise BagValidationError( + _( + "Payload-Oxum validation failed." + " Expected %(oxum_file_count)d files and %(oxum_byte_count)d bytes" + " but found %(found_file_count)d files and %(found_byte_count)d bytes" + ) + % { + "found_file_count": total_files, + "found_byte_count": total_bytes, + "oxum_file_count": oxum_file_count, + "oxum_byte_count": oxum_byte_count, + } + ) + + def _validate_completeness(self): + """ + Verify that the actual file manifests match the files in the data directory + """ + errors = list() + + # First we'll make sure there's no mismatch between the filesystem + # and the list of files in the manifest(s) + only_in_manifests, only_on_fs = self.compare_manifests_with_fs() + for path in only_in_manifests: + e = FileMissing(path) + LOGGER.warning(force_unicode(e)) + errors.append(e) + for path in only_on_fs: + e = UnexpectedFile(path) + LOGGER.warning(force_unicode(e)) + errors.append(e) + + if errors: + raise BagValidationError(_("Bag validation failed"), errors) + + def _validate_entries(self, processes): + """ + Verify that the actual file contents match the recorded hashes stored in the manifest files + """ + errors = list() + + if os.name == "posix": + worker_init = posix_multiprocessing_worker_initializer + else: + worker_init = None + + args = ( + ( + self.path, + self.normalized_filesystem_names.get(rel_path, rel_path), + hashes, + self.algorithms, + ) + for rel_path, hashes in self.entries.items() + ) + + try: + if processes == 1: + hash_results = [_calc_hashes(i) for i in args] + else: + try: + pool = multiprocessing.Pool( + processes if processes else None, initializer=worker_init + ) + hash_results = pool.map(_calc_hashes, args) + finally: + pool.terminate() + + # Any unhandled exceptions are probably fatal + except: + LOGGER.exception(_("Unable to calculate file hashes for %s"), self) + raise + + for rel_path, f_hashes, hashes in hash_results: + for alg, computed_hash in f_hashes.items(): + stored_hash = hashes[alg] + if stored_hash.lower() != computed_hash: + e = ChecksumMismatch( + rel_path, alg, stored_hash.lower(), computed_hash + ) + LOGGER.warning(force_unicode(e)) + errors.append(e) + + if errors: + raise BagValidationError(_("Bag validation failed"), errors) + + def _validate_bagittxt(self): + """ + Verify that bagit.txt conforms to specification + """ + bagit_file_path = os.path.join(self.path, "bagit.txt") + + # Note that we are intentionally opening this file in binary mode so we can confirm + # that it does not start with the UTF-8 byte-order-mark + with open(bagit_file_path, "rb") as bagit_file: + first_line = bagit_file.read(4) + if first_line.startswith(codecs.BOM_UTF8): + raise BagValidationError( + _("bagit.txt must not contain a byte-order mark") + ) + + def _path_is_dangerous(self, path): + """ + Return true if path looks dangerous, i.e. potentially operates + outside the bagging directory structure, e.g. 
~/.bashrc, ../../../secrets.json, + \\?\c:\, D:\sys32\cmd.exe + """ + if os.path.isabs(path): + return True + if os.path.expanduser(path) != path: + return True + if os.path.expandvars(path) != path: + return True + real_path = os.path.realpath(os.path.join(self.path, path)) + real_path = os.path.normpath(real_path) + bag_path = os.path.realpath(self.path) + bag_path = os.path.normpath(bag_path) + common = os.path.commonprefix((bag_path, real_path)) + return not (common == bag_path) + + +class BagError(Exception): + pass + + +class BagValidationError(BagError): + def __init__(self, message, details=None): + super(BagValidationError, self).__init__() + + if details is None: + details = [] + + self.message = message + self.details = details + + def __str__(self): + if len(self.details) > 0: + details = "; ".join([force_unicode(e) for e in self.details]) + return "%s: %s" % (self.message, details) + return self.message + + +class ManifestErrorDetail(BagError): + def __init__(self, path): + super(ManifestErrorDetail, self).__init__() + + self.path = path + + +class ChecksumMismatch(ManifestErrorDetail): + def __init__(self, path, algorithm=None, expected=None, found=None): + super(ChecksumMismatch, self).__init__(path) + + self.path = path + self.algorithm = algorithm + self.expected = expected + self.found = found + + def __str__(self): + return _( + '%(path)s %(algorithm)s validation failed: expected="%(expected)s" found="%(found)s"' + ) % { + "path": force_unicode(self.path), + "algorithm": self.algorithm, + "expected": self.expected, + "found": self.found, + } + + +class FileMissing(ManifestErrorDetail): + def __str__(self): + return _( + "%s exists in manifest but was not found on filesystem" + ) % force_unicode(self.path) + + +class UnexpectedFile(ManifestErrorDetail): + def __str__(self): + return _("%s exists on filesystem but is not in the manifest") % self.path + + +class FileNormalizationConflict(BagError): + """ + Exception raised when two files differ only in normalization and thus + are not safely portable + """ + + def __init__(self, file_a, file_b): + super(FileNormalizationConflict, self).__init__() + + self.file_a = file_a + self.file_b = file_b + + def __str__(self): + return _( + 'Unicode normalization conflict for file "%(file_a)s" and "%(file_b)s"' + ) % {"file_a": self.file_a, "file_b": self.file_b} + + +def posix_multiprocessing_worker_initializer(): + """Ignore SIGINT in multiprocessing workers on POSIX systems""" + signal.signal(signal.SIGINT, signal.SIG_IGN) + + +# The Unicode normalization form used here doesn't matter – all we care about +# is consistency since the input value will be preserved: + + +def normalize_unicode_py3(s): + return unicodedata.normalize("NFC", s) + + +def normalize_unicode_py2(s): + if isinstance(s, str): + s = s.decode("utf-8") + return unicodedata.normalize("NFC", s) + + +if sys.version_info > (3, 0): + normalize_unicode = normalize_unicode_py3 +else: + normalize_unicode = normalize_unicode_py2 + + +def build_unicode_normalized_lookup_dict(filenames): + """ + Return a dictionary mapping unicode-normalized filenames to as-encoded + values to efficiently detect conflicts between the filesystem and manifests. + + This is necessary because some filesystems and utilities may automatically + apply a different Unicode normalization form to filenames than was applied + when the bag was originally created. 
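    A short sketch of the conflict this function guards against (file names
    illustrative):

        build_unicode_normalized_lookup_dict(["caf\u00e9.txt", "cafe\u0301.txt"])
        # raises FileNormalizationConflict: both names normalize to the same
        # NFC value, so only one of them could survive such a round trip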
+ + The best known example of this is when a bag is created using a + normalization form other than NFD and then transferred to a Mac where the + HFS+ filesystem will transparently normalize filenames to a variant of NFD + for every call: + + https://developer.apple.com/legacy/library/technotes/tn/tn1150.html#UnicodeSubtleties + + Windows is documented as storing filenames exactly as provided: + + https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247%28v=vs.85%29.aspx + + Linux performs no normalization in the kernel but it is technically + valid for a filesystem to perform normalization, such as when an HFS+ + volume is mounted. + + See http://www.unicode.org/reports/tr15/ for a full discussion of + equivalence and normalization in Unicode. + """ + + output = dict() + + for filename in filenames: + normalized_filename = normalize_unicode(filename) + if normalized_filename in output: + raise FileNormalizationConflict(filename, output[normalized_filename]) + else: + output[normalized_filename] = filename + + return output + + +def get_hashers(algorithms): + """ + Given a list of algorithm names, return a dictionary of hasher instances + + This avoids redundant code between the creation and validation code where in + both cases we want to avoid reading the same file more than once. The + intended use is a simple for loop: + + for block in file: + for hasher in hashers.values(): + hasher.update(block) + """ + + hashers = {} + + for alg in algorithms: + try: + hasher = hashlib.new(alg) + except ValueError: + LOGGER.warning( + _("Disabling requested hash algorithm %s: hashlib does not support it"), + alg, + ) + continue + + hashers[alg] = hasher + + if not hashers: + raise ValueError( + _( + "Unable to continue: hashlib does not support any of the requested algorithms!" + ) + ) + + return hashers + + +def _calc_hashes(args): + # auto unpacking of sequences illegal in Python3 + (base_path, rel_path, hashes, algorithms) = args + full_path = os.path.join(base_path, rel_path) + + # Create a clone of the default empty hash objects: + f_hashers = dict((alg, hashlib.new(alg)) for alg in hashes if alg in algorithms) + + try: + f_hashes = _calculate_file_hashes(full_path, f_hashers) + except BagValidationError as e: + f_hashes = dict((alg, force_unicode(e)) for alg in f_hashers.keys()) + + return rel_path, f_hashes, hashes + + +def _calculate_file_hashes(full_path, f_hashers): + """ + Returns a dictionary of (algorithm, hexdigest) values for the provided + filename + """ + LOGGER.info(_("Verifying checksum for file %s"), full_path) + + try: + with open(full_path, "rb") as f: + while True: + block = f.read(HASH_BLOCK_SIZE) + if not block: + break + for i in f_hashers.values(): + i.update(block) + except (OSError, IOError) as e: + raise BagValidationError( + _("Could not read %(filename)s: %(error)s") + % {"filename": full_path, "error": force_unicode(e)} + ) + + return dict((alg, h.hexdigest()) for alg, h in f_hashers.items()) + + +def _load_tag_file(tag_file_name, encoding="utf-8-sig"): + with open_text_file(tag_file_name, "r", encoding=encoding) as tag_file: + # Store duplicate tags as list of vals + # in order of parsing under the same key. + tags = {} + for name, value in _parse_tags(tag_file): + if name not in tags: + tags[name] = value + continue + + if not isinstance(tags[name], list): + tags[name] = [tags[name], value] + else: + tags[name].append(value) + + return tags + + +def _parse_tags(tag_file): + """Parses a tag file, according to RFC 2822. 
This + includes line folding, permitting extra-long + field values. + + See http://www.faqs.org/rfcs/rfc2822.html for + more information. + """ + + tag_name = None + tag_value = None + + # Line folding is handled by yielding values only after we encounter + # the start of a new tag, or if we pass the EOF. + for num, line in enumerate(tag_file): + # Skip over any empty or blank lines. + if len(line) == 0 or line.isspace(): + continue + elif line[0].isspace() and tag_value is not None: # folded line + tag_value += line + else: + # Starting a new tag; yield the last one. + if tag_name: + yield (tag_name, tag_value.strip()) + + if ":" not in line: + raise BagValidationError( + _("%(filename)s contains invalid tag: %(line)s") + % { + "line": line.strip(), + "filename": os.path.basename(tag_file.name), + } + ) + + parts = line.strip().split(":", 1) + tag_name = parts[0].strip() + tag_value = parts[1] + + # Passed the EOF. All done after this. + if tag_name: + yield (tag_name, tag_value.strip()) + + +def _make_tag_file(bag_info_path, bag_info): + headers = sorted(bag_info.keys()) + with open_text_file(bag_info_path, "w") as f: + for h in headers: + values = bag_info[h] + if not isinstance(values, list): + values = [values] + for txt in values: + # strip CR, LF and CRLF so they don't mess up the tag file + txt = re.sub(r"\n|\r|(\r\n)", "", force_unicode(txt)) + f.write("%s: %s\n" % (h, txt)) + + +def make_manifests(data_dir, processes, algorithms=DEFAULT_CHECKSUMS, encoding="utf-8"): + LOGGER.info( + _("Using %(process_count)d processes to generate manifests: %(algorithms)s"), + {"process_count": processes, "algorithms": ", ".join(algorithms)}, + ) + + manifest_line_generator = partial(generate_manifest_lines, algorithms=algorithms) + + if processes > 1: + pool = multiprocessing.Pool(processes=processes) + checksums = pool.map(manifest_line_generator, _walk(data_dir)) + pool.close() + pool.join() + else: + checksums = [manifest_line_generator(i) for i in _walk(data_dir)] + + # At this point we have a list of tuples which start with the algorithm name: + manifest_data = {} + for batch in checksums: + for entry in batch: + manifest_data.setdefault(entry[0], []).append(entry[1:]) + + # These will be keyed on the algorithm name so we can perform sanity checks + # below to catch failures in the hashing process: + num_files = defaultdict(lambda: 0) + total_bytes = defaultdict(lambda: 0) + + for algorithm, values in manifest_data.items(): + manifest_filename = "manifest-%s.txt" % algorithm + + with open_text_file(manifest_filename, "w", encoding=encoding) as manifest: + for digest, filename, byte_count in values: + manifest.write("%s %s\n" % (digest, _encode_filename(filename))) + num_files[algorithm] += 1 + total_bytes[algorithm] += byte_count + + # We'll use sets of the values for the error checks and eventually return the payload oxum values: + byte_value_set = set(total_bytes.values()) + file_count_set = set(num_files.values()) + + # allow a bag with an empty payload + if not byte_value_set and not file_count_set: + return 0, 0 + + if len(file_count_set) != 1: + raise RuntimeError(_("Expected the same number of files for each checksum")) + + if len(byte_value_set) != 1: + raise RuntimeError(_("Expected the same number of bytes for each checksums")) + + return byte_value_set.pop(), file_count_set.pop() + + +def _make_tagmanifest_file(alg, bag_dir, encoding="utf-8"): + tagmanifest_file = join(bag_dir, "tagmanifest-%s.txt" % alg) + LOGGER.info(_("Creating %s"), tagmanifest_file) + + checksums = [] + for 
f in _find_tag_files(bag_dir): + if re.match(r"^tagmanifest-.+\.txt$", f): + continue + with open(join(bag_dir, f), "rb") as fh: + m = hashlib.new(alg) + while True: + block = fh.read(HASH_BLOCK_SIZE) + if not block: + break + m.update(block) + checksums.append((m.hexdigest(), f)) + + with open_text_file( + join(bag_dir, tagmanifest_file), mode="w", encoding=encoding + ) as tagmanifest: + for digest, filename in checksums: + tagmanifest.write("%s %s\n" % (digest, filename)) + + +def _find_tag_files(bag_dir): + for dir in os.listdir(bag_dir): + if dir != "data": + if os.path.isfile(dir) and not dir.startswith("tagmanifest-"): + yield dir + for dir_name, _, filenames in os.walk(dir): + for filename in filenames: + if filename.startswith("tagmanifest-"): + continue + # remove everything up to the bag_dir directory + p = join(dir_name, filename) + yield os.path.relpath(p, bag_dir) + + +def _walk(data_dir): + for dirpath, dirnames, filenames in os.walk(data_dir): + # if we don't sort here the order of entries is non-deterministic + # which makes it hard to test the fixity of tagmanifest-md5.txt + filenames.sort() + dirnames.sort() + for fn in filenames: + path = os.path.join(dirpath, fn) + # BagIt spec requires manifest to always use '/' as path separator + if os.path.sep != "/": + parts = path.split(os.path.sep) + path = "/".join(parts) + yield path + + +def _can_bag(test_dir): + """Scan the provided directory for files which cannot be bagged due to insufficient permissions""" + unbaggable = [] + + if not os.access(test_dir, os.R_OK): + # We cannot continue without permission to read the source directory + unbaggable.append(test_dir) + return unbaggable + + if not os.access(test_dir, os.W_OK): + unbaggable.append(test_dir) + + for dirpath, dirnames, filenames in os.walk(test_dir): + for directory in dirnames: + full_path = os.path.join(dirpath, directory) + if not os.access(full_path, os.W_OK): + unbaggable.append(full_path) + + return unbaggable + + +def _can_read(test_dir): + """ + returns ((unreadable_dirs), (unreadable_files)) + """ + unreadable_dirs = [] + unreadable_files = [] + + if not os.access(test_dir, os.R_OK): + unreadable_dirs.append(test_dir) + else: + for dirpath, dirnames, filenames in os.walk(test_dir): + for dn in dirnames: + full_path = os.path.join(dirpath, dn) + if not os.access(full_path, os.R_OK): + unreadable_dirs.append(full_path) + for fn in filenames: + full_path = os.path.join(dirpath, fn) + if not os.access(full_path, os.R_OK): + unreadable_files.append(full_path) + return (tuple(unreadable_dirs), tuple(unreadable_files)) + + +def generate_manifest_lines(filename, algorithms=DEFAULT_CHECKSUMS): + LOGGER.info(_("Generating manifest lines for file %s"), filename) + + # For performance we'll read the file only once and pass it block + # by block to every requested hash algorithm: + hashers = get_hashers(algorithms) + + total_bytes = 0 + + with open(filename, "rb") as f: + while True: + block = f.read(HASH_BLOCK_SIZE) + + if not block: + break + + total_bytes += len(block) + for hasher in hashers.values(): + hasher.update(block) + + decoded_filename = _decode_filename(filename) + + # We'll generate a list of results in roughly manifest format but prefixed with the algorithm: + results = [ + (alg, hasher.hexdigest(), decoded_filename, total_bytes) + for alg, hasher in hashers.items() + ] + + return results + + +def _encode_filename(s): + s = s.replace("\r", "%0D") + s = s.replace("\n", "%0A") + return s + + +def _decode_filename(s): + s = re.sub(r"%0D", "\r", s, 
re.IGNORECASE) + s = re.sub(r"%0A", "\n", s, re.IGNORECASE) + return s + + +def force_unicode_py2(s): + """Reliably return a Unicode string given a possible unicode or byte string""" + if isinstance(s, str): + return s.decode("utf-8") + else: + return unicode(s) + + +if sys.version_info > (3, 0): + force_unicode = str +else: + force_unicode = force_unicode_py2 + +# following code is used for command line program + + +class BagArgumentParser(argparse.ArgumentParser): + def __init__(self, *args, **kwargs): + argparse.ArgumentParser.__init__(self, *args, **kwargs) + self.set_defaults(bag_info={}) + + +class BagHeaderAction(argparse.Action): + def __call__(self, parser, namespace, values, option_string=None): + opt = option_string.lstrip("--") + opt_caps = "-".join([o.capitalize() for o in opt.split("-")]) + namespace.bag_info[opt_caps] = values + + +def _make_parser(): + parser = BagArgumentParser( + formatter_class=argparse.RawDescriptionHelpFormatter, + description="bagit-python version %s\n\n%s\n" % (VERSION, __doc__.strip()), + ) + parser.add_argument( + "--processes", + type=int, + dest="processes", + default=1, + help=_( + "Use multiple processes to calculate checksums faster (default: %(default)s)" + ), + ) + parser.add_argument("--log", help=_("The name of the log file (default: stdout)")) + parser.add_argument( + "--quiet", + action="store_true", + help=_("Suppress all progress information other than errors"), + ) + parser.add_argument( + "--validate", + action="store_true", + help=_( + "Validate existing bags in the provided directories instead of" + " creating new ones" + ), + ) + parser.add_argument( + "--fast", + action="store_true", + help=_( + "Modify --validate behaviour to only test whether the bag directory" + " has the number of files and total size specified in Payload-Oxum" + " without performing checksum validation to detect corruption." + ), + ) + parser.add_argument( + "--completeness-only", + action="store_true", + help=_( + "Modify --validate behaviour to test whether the bag directory" + " has the expected payload specified in the checksum manifests" + " without performing checksum validation to detect corruption." + ), + ) + + checksum_args = parser.add_argument_group( + _("Checksum Algorithms"), + _( + "Select the manifest algorithms to be used when creating bags" + " (default=%s)" + ) + % ", ".join(DEFAULT_CHECKSUMS), + ) + + for i in CHECKSUM_ALGOS: + alg_name = re.sub(r"^([A-Z]+)(\d+)$", r"\1-\2", i.upper()) + checksum_args.add_argument( + "--%s" % i, + action="append_const", + dest="checksums", + const=i, + help=_("Generate %s manifest when creating a bag") % alg_name, + ) + + metadata_args = parser.add_argument_group(_("Optional Bag Metadata")) + for header in STANDARD_BAG_INFO_HEADERS: + metadata_args.add_argument( + "--%s" % header.lower(), type=str, action=BagHeaderAction, default=argparse.SUPPRESS + ) + + parser.add_argument( + "directory", + nargs="+", + help=_( + "Directory which will be converted into a bag in place" + " by moving any existing files into the BagIt structure" + " and creating the manifests and other metadata." 
+ ), + ) + + return parser + + +def _configure_logging(opts): + log_format = "%(asctime)s - %(levelname)s - %(message)s" + if opts.quiet: + level = logging.ERROR + else: + level = logging.INFO + if opts.log: + logging.basicConfig(filename=opts.log, level=level, format=log_format) + else: + logging.basicConfig(level=level, format=log_format) + + +def main(): + if "--version" in sys.argv: + print(_("bagit-python version %s") % VERSION) + sys.exit(0) + + parser = _make_parser() + args = parser.parse_args() + + if args.processes < 0: + parser.error(_("The number of processes must be 0 or greater")) + + if args.fast and not args.validate: + parser.error(_("--fast is only allowed as an option for --validate!")) + + _configure_logging(args) + + rc = 0 + for bag_dir in args.directory: + # validate the bag + if args.validate: + try: + bag = Bag(bag_dir) + # validate throws a BagError or BagValidationError + bag.validate( + processes=args.processes, + fast=args.fast, + completeness_only=args.completeness_only, + ) + if args.fast: + LOGGER.info(_("%s valid according to Payload-Oxum"), bag_dir) + else: + LOGGER.info(_("%s is valid"), bag_dir) + except BagError as e: + LOGGER.error( + _("%(bag)s is invalid: %(error)s"), {"bag": bag_dir, "error": e} + ) + rc = 1 + + # make the bag + else: + try: + make_bag( + bag_dir, + bag_info=args.bag_info, + processes=args.processes, + checksums=args.checksums, + ) + except Exception as exc: + LOGGER.error( + _("Failed to create bag in %(bag_directory)s: %(error)s"), + {"bag_directory": bag_dir, "error": exc}, + exc_info=True, + ) + rc = 1 + + sys.exit(rc) + + +if __name__ == "__main__": + main() diff -r 000000000000 -r 4f3585e2f14b env/bin/bioblend-galaxy-tests --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/bioblend-galaxy-tests Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from bioblend._tests.pytest_galaxy_test_wrapper import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/bundle_image --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/bundle_image Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,27 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +from boto.manage.server import Server +if __name__ == "__main__": + from optparse import OptionParser + parser = OptionParser(version="%prog 1.0", usage="Usage: %prog [options] instance-id [instance-id-2]") + + # Commands + parser.add_option("-b", "--bucket", help="Destination Bucket", dest="bucket", default=None) + parser.add_option("-p", "--prefix", help="AMI Prefix", dest="prefix", default=None) + parser.add_option("-k", "--key", help="Private Key File", dest="key_file", default=None) + parser.add_option("-c", "--cert", help="Public Certificate File", dest="cert_file", default=None) + parser.add_option("-s", "--size", help="AMI Size", dest="size", default=None) + parser.add_option("-i", "--ssh-key", help="SSH Keyfile", dest="ssh_key", default=None) + parser.add_option("-u", "--user-name", help="SSH Username", dest="uname", default="root") + parser.add_option("-n", "--name", help="Name of Image", dest="name") + (options, args) = parser.parse_args() + + for instance_id in args: + try: + s = Server.find(instance_id=instance_id).next() + print "Found old server object" + except StopIteration: + print "New Server 
Object Created" + s = Server.create_from_instance_id(instance_id, options.name) + assert(s.hostname is not None) + b = s.get_bundler(uname=options.uname) + b.bundle(bucket=options.bucket,prefix=options.prefix,key_file=options.key_file,cert_file=options.cert_file,size=int(options.size),ssh_key=options.ssh_key) diff -r 000000000000 -r 4f3585e2f14b env/bin/cfadmin --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/cfadmin Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,108 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Author: Chris Moyer +# +# cfadmin is similar to sdbadmin for CloudFront, it's a simple +# console utility to perform the most frequent tasks with CloudFront +# +def _print_distributions(dists): + """Internal function to print out all the distributions provided""" + print "%-12s %-50s %s" % ("Status", "Domain Name", "Origin") + print "-"*80 + for d in dists: + print "%-12s %-50s %-30s" % (d.status, d.domain_name, d.origin) + for cname in d.cnames: + print " "*12, "CNAME => %s" % cname + print "" + +def help(cf, fnc=None): + """Print help message, optionally about a specific function""" + import inspect + self = sys.modules['__main__'] + if fnc: + try: + cmd = getattr(self, fnc) + except: + cmd = None + if not inspect.isfunction(cmd): + print "No function named: %s found" % fnc + sys.exit(2) + (args, varargs, varkw, defaults) = inspect.getargspec(cmd) + print cmd.__doc__ + print "Usage: %s %s" % (fnc, " ".join([ "[%s]" % a for a in args[1:]])) + else: + print "Usage: cfadmin [command]" + for cname in dir(self): + if not cname.startswith("_"): + cmd = getattr(self, cname) + if inspect.isfunction(cmd): + doc = cmd.__doc__ + print "\t%s - %s" % (cname, doc) + sys.exit(1) + +def ls(cf): + """List all distributions and streaming distributions""" + print "Standard Distributions" + _print_distributions(cf.get_all_distributions()) + print "Streaming Distributions" + _print_distributions(cf.get_all_streaming_distributions()) + +def invalidate(cf, origin_or_id, *paths): + """Create a cloudfront invalidation request""" + # Allow paths to be passed using stdin + if not paths: + paths = [] + for path in sys.stdin.readlines(): + path = path.strip() + if path: + paths.append(path) + dist = None + for d in cf.get_all_distributions(): + if d.id == origin_or_id or d.origin.dns_name == origin_or_id: + dist = d + break + if not dist: + print "Distribution not found: %s" % origin_or_id + sys.exit(1) + cf.create_invalidation_request(dist.id, paths) + +def listinvalidations(cf, origin_or_id): + """List invalidation requests for a given origin""" + dist = None + for d in cf.get_all_distributions(): + if d.id == origin_or_id or d.origin.dns_name == origin_or_id: + dist = d + break + if not dist: + print "Distribution not found: %s" % origin_or_id + sys.exit(1) + results = cf.get_invalidation_requests(dist.id) + if results: + for result in results: + if result.status == "InProgress": + result = result.get_invalidation_request() + print result.id, result.status, result.paths + else: + print result.id, result.status + + +if __name__ == "__main__": + import boto + import sys + cf = boto.connect_cloudfront() + self = sys.modules['__main__'] + if len(sys.argv) >= 2: + try: + cmd = getattr(self, sys.argv[1]) + except: + cmd = None + args = sys.argv[2:] + else: + cmd = help + args = [] + if not cmd: + cmd = help + try: + cmd(cf, *args) + except TypeError as e: + print e + help(cf, cmd.__name__) diff -r 000000000000 -r 4f3585e2f14b env/bin/chardetect --- /dev/null 
Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/chardetect Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from chardet.cli.chardetect import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/coloredlogs --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/coloredlogs Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from coloredlogs.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/cq --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/cq Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,92 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/ +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, dis- +# tribute, sublicense, and/or sell copies of the Software, and to permit +# persons to whom the Software is furnished to do so, subject to the fol- +# lowing conditions: +# +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- +# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS +# IN THE SOFTWARE. 
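# A few illustrative invocations (queue and region names hypothetical):
#
#   cq                           # list every queue and its approximate depth
#   cq -q work-queue -c          # clear all messages from work-queue
#   cq -q work-queue -o out.txt  # dump work-queue's messages to out.txt
#   cq -r eu-west-1              # point the tool at a specific SQS region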
+# +import getopt, sys +import boto.sqs +from boto.sqs.connection import SQSConnection +from boto.exception import SQSError + +def usage(): + print 'cq [-c] [-q queue_name] [-o output_file] [-t timeout] [-r region]' + +def main(): + try: + opts, args = getopt.getopt(sys.argv[1:], 'hcq:o:t:r:', + ['help', 'clear', 'queue=', + 'output=', 'timeout=', 'region=']) + except: + usage() + sys.exit(2) + queue_name = '' + output_file = '' + timeout = 30 + region = '' + clear = False + for o, a in opts: + if o in ('-h', '--help'): + usage() + sys.exit() + if o in ('-q', '--queue'): + queue_name = a + if o in ('-o', '--output'): + output_file = a + if o in ('-c', '--clear'): + clear = True + if o in ('-t', '--timeout'): + timeout = int(a) + if o in ('-r', '--region'): + region = a + if region: + c = boto.sqs.connect_to_region(region) + if c is None: + print 'Invalid region (%s)' % region + sys.exit(1) + else: + c = SQSConnection() + if queue_name: + try: + rs = [c.create_queue(queue_name)] + except SQSError as e: + print 'An Error Occurred:' + print '%s: %s' % (e.status, e.reason) + print e.body + sys.exit() + else: + try: + rs = c.get_all_queues() + except SQSError as e: + print 'An Error Occurred:' + print '%s: %s' % (e.status, e.reason) + print e.body + sys.exit() + for q in rs: + if clear: + n = q.clear() + print 'clearing %d messages from %s' % (n, q.id) + elif output_file: + q.dump(output_file) + else: + print q.id, q.count(vtimeout=timeout) + +if __name__ == "__main__": + main() + diff -r 000000000000 -r 4f3585e2f14b env/bin/csv2rdf --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/csv2rdf Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from rdflib.tools.csv2rdf import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/cwltool --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/cwltool Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from cwltool.main import run +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(run()) diff -r 000000000000 -r 4f3585e2f14b env/bin/cwutil --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/cwutil Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,140 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Author: Chris Moyer +# Description: CloudWatch Utility +# For listing stats, creating alarms, and managing +# other CloudWatch aspects + +import boto +cw = boto.connect_cloudwatch() + +from datetime import datetime, timedelta + +def _parse_time(time_string): + """Internal function to parse a time string""" + +def _parse_dict(d_string): + result = {} + if d_string: + for d in d_string.split(","): + d = d.split(":") + result[d[0]] = d[1] + return result + +def ls(namespace=None): + """ + List metrics, optionally filtering by a specific namespace + namespace: Optional Namespace to filter on + """ + print "%-10s %-50s %s" % ("Namespace", "Metric Name", "Dimensions") + print "-"*80 + for m in cw.list_metrics(): + if namespace is None or namespace.upper() in m.namespace: + print "%-10s %-50s %s" % (m.namespace, m.name, m.dimensions) + +def stats(namespace, metric_name, dimensions=None, 
statistics="Average", start_time=None, end_time=None, period=60, unit=None): + """ + Lists the statistics for a specific metric + namespace: The namespace to use, usually "AWS/EC2", "AWS/SQS", etc. + metric_name: The name of the metric to track, pulled from `ls` + dimensions: The dimensions to use, formatted as Name:Value (such as QueueName:myQueue) + statistics: The statistics to measure, defaults to "Average" + 'Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount' + start_time: Start time, default to now - 1 day + end_time: End time, default to now + period: Period/interval for counts, default to 60 minutes + unit: Unit to track, default depends on what metric is being tracked + """ + + # Parse the dimensions + dimensions = _parse_dict(dimensions) + + # Parse the times + if end_time: + end_time = _parse_time(end_time) + else: + end_time = datetime.utcnow() + if start_time: + start_time = _parse_time(start_time) + else: + start_time = datetime.utcnow() - timedelta(days=1) + + print "%-30s %s" % ('Timestamp', statistics) + print "-"*50 + data = {} + for m in cw.get_metric_statistics(int(period), start_time, end_time, metric_name, namespace, statistics, dimensions, unit): + data[m['Timestamp']] = m[statistics] + keys = data.keys() + keys.sort() + for k in keys: + print "%-30s %s" % (k, data[k]) + +def put(namespace, metric_name, dimensions=None, value=None, unit=None, statistics=None, timestamp=None): + """ + Publish custom metrics + namespace: The namespace to use; values starting with "AWS/" are reserved + metric_name: The name of the metric to update + dimensions: The dimensions to use, formatted as Name:Value (such as QueueName:myQueue) + value: The value to store, mutually exclusive with `statistics` + statistics: The statistics to store, mutually exclusive with `value` + (must specify all of "Minimum", "Maximum", "Sum", "SampleCount") + timestamp: The timestamp of this measurement, default is current server time + unit: Unit to track, default depends on what metric is being tracked + """ + + def simplify(lst): + return lst[0] if len(lst) == 1 else lst + + print cw.put_metric_data(namespace, simplify(metric_name.split(';')), + dimensions = simplify(map(_parse_dict, dimensions.split(';'))) if dimensions else None, + value = simplify(value.split(';')) if value else None, + statistics = simplify(map(_parse_dict, statistics.split(';'))) if statistics else None, + timestamp = simplify(timestamp.split(';')) if timestamp else None, + unit = simplify(unit.split(';')) if unit else None) + +def help(fnc=None): + """ + Print help message, optionally about a specific function + """ + import inspect + self = sys.modules['__main__'] + if fnc: + try: + cmd = getattr(self, fnc) + except: + cmd = None + if not inspect.isfunction(cmd): + print "No function named: %s found" % fnc + sys.exit(2) + (args, varargs, varkw, defaults) = inspect.getargspec(cmd) + print cmd.__doc__ + print "Usage: %s %s" % (fnc, " ".join([ "[%s]" % a for a in args])) + else: + print "Usage: cwutil [command]" + for cname in dir(self): + if not cname.startswith("_") and not cname == "cmd": + cmd = getattr(self, cname) + if inspect.isfunction(cmd): + doc = cmd.__doc__ + print "\t%s - %s" % (cname, doc) + sys.exit(1) + + +if __name__ == "__main__": + import sys + self = sys.modules['__main__'] + if len(sys.argv) >= 2: + try: + cmd = getattr(self, sys.argv[1]) + except: + cmd = None + args = sys.argv[2:] + else: + cmd = help + args = [] + if not cmd: + cmd = help + try: + cmd(*args) + except TypeError as e: + print e + 
help(cmd.__name__) diff -r 000000000000 -r 4f3585e2f14b env/bin/doesitcache --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/doesitcache Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,33 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# EASY-INSTALL-ENTRY-SCRIPT: 'CacheControl==0.11.7','console_scripts','doesitcache' +import re +import sys + +# for compatibility with easy_install; see #2198 +__requires__ = 'CacheControl==0.11.7' + +try: + from importlib.metadata import distribution +except ImportError: + try: + from importlib_metadata import distribution + except ImportError: + from pkg_resources import load_entry_point + + +def importlib_load_entry_point(spec, group, name): + dist_name, _, _ = spec.partition('==') + matches = ( + entry_point + for entry_point in distribution(dist_name).entry_points + if entry_point.group == group and entry_point.name == name + ) + return next(matches).load() + + +globals().setdefault('load_entry_point', importlib_load_entry_point) + + +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0]) + sys.exit(load_entry_point('CacheControl==0.11.7', 'console_scripts', 'doesitcache')()) diff -r 000000000000 -r 4f3585e2f14b env/bin/dynamodb_dump --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/dynamodb_dump Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,76 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +import argparse +import errno +import os + +import boto +from boto.compat import json +from boto.compat import six + + +DESCRIPTION = """Dump the contents of one or more DynamoDB tables to the local filesystem. + +Each table is dumped into two files: + - {table_name}.metadata stores the table's name, schema and provisioned + throughput. + - {table_name}.data stores the table's actual contents. + +Both files are created in the current directory. To write them somewhere else, +use the --out-dir parameter (the target directory will be created if needed). +""" + + +def dump_table(table, out_dir): + metadata_file = os.path.join(out_dir, "%s.metadata" % table.name) + data_file = os.path.join(out_dir, "%s.data" % table.name) + + with open(metadata_file, "w") as metadata_fd: + json.dump( + { + "name": table.name, + "schema": table.schema.dict, + "read_units": table.read_units, + "write_units": table.write_units, + }, + metadata_fd + ) + + with open(data_file, "w") as data_fd: + for item in table.scan(): + # JSON can't serialize sets -- convert those to lists. + data = {} + for k, v in six.iteritems(item): + if isinstance(v, (set, frozenset)): + data[k] = list(v) + else: + data[k] = v + + data_fd.write(json.dumps(data)) + data_fd.write("\n") + + +def dynamodb_dump(tables, out_dir): + try: + os.makedirs(out_dir) + except OSError as e: + # We don't care if the dir already exists. 
+ if e.errno != errno.EEXIST: + raise + + conn = boto.connect_dynamodb() + for t in tables: + dump_table(conn.get_table(t), out_dir) + + +if __name__ == "__main__": + parser = argparse.ArgumentParser( + prog="dynamodb_dump", + description=DESCRIPTION + ) + parser.add_argument("--out-dir", default=".") + parser.add_argument("tables", metavar="TABLES", nargs="+") + + namespace = parser.parse_args() + + dynamodb_dump(namespace.tables, namespace.out_dir) diff -r 000000000000 -r 4f3585e2f14b env/bin/dynamodb_load --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/dynamodb_load Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,110 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +import argparse +import os + +import boto +from boto.compat import json +from boto.compat import six +from boto.dynamodb.schema import Schema + + +DESCRIPTION = """Load data into one or more DynamoDB tables. + +For each table, data is read from two files: + - {table_name}.metadata for the table's name, schema and provisioned + throughput (only required if creating the table). + - {table_name}.data for the table's actual contents. + +Both files are searched for in the current directory. To read them from +somewhere else, use the --in-dir parameter. + +This program does not wipe the tables prior to loading data. However, any +items present in the data files will overwrite the table's contents. +""" + + +def _json_iterload(fd): + """Lazily load newline-separated JSON objects from a file-like object.""" + buffer = "" + eof = False + while not eof: + try: + # Add a line to the buffer + buffer += fd.next() + except StopIteration: + # We can't let that exception bubble up, otherwise the last + # object in the file will never be decoded. + eof = True + try: + # Try to decode a JSON object. + json_object = json.loads(buffer.strip()) + + # Success: clear the buffer (everything was decoded). + buffer = "" + except ValueError: + if eof and buffer.strip(): + # No more lines to load and the buffer contains something other + # than whitespace: the file is, in fact, malformed. + raise + # We couldn't decode a complete JSON object: load more lines. + continue + + yield json_object + + +def create_table(metadata_fd): + """Create a table from a metadata file-like object.""" + + +def load_table(table, in_fd): + """Load items into a table from a file-like object.""" + for i in _json_iterload(in_fd): + # Convert lists back to sets. + data = {} + for k, v in six.iteritems(i): + if isinstance(v, list): + data[k] = set(v) + else: + data[k] = v + table.new_item(attrs=data).put() + + +def dynamodb_load(tables, in_dir, create_tables): + conn = boto.connect_dynamodb() + for t in tables: + metadata_file = os.path.join(in_dir, "%s.metadata" % t) + data_file = os.path.join(in_dir, "%s.data" % t) + if create_tables: + with open(metadata_file) as meta_fd: + metadata = json.load(meta_fd) + table = conn.create_table( + name=t, + schema=Schema(metadata["schema"]), + read_units=metadata["read_units"], + write_units=metadata["write_units"], + ) + table.refresh(wait_for_active=True) + else: + table = conn.get_table(t) + + with open(data_file) as in_fd: + load_table(table, in_fd) + + +if __name__ == "__main__": + parser = argparse.ArgumentParser( + prog="dynamodb_load", + description=DESCRIPTION + ) + parser.add_argument( + "--create-tables", + action="store_true", + help="Create the tables if they don't exist already (without this flag, attempts to load data into non-existing tables fail)." 
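+        # Editor's aside: with --create-tables, each table is rebuilt from the + # {table_name}.metadata JSON written by dynamodb_dump, e.g. (values here are + # hypothetical): + #   {"name": "users", "schema": {...}, "read_units": 5, "write_units": 5} + # Without the flag, only {table_name}.data is read and the table must + # already exist.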
+ ) + parser.add_argument("--in-dir", default=".") + parser.add_argument("tables", metavar="TABLES", nargs="+") + + namespace = parser.parse_args() + + dynamodb_load(namespace.tables, namespace.in_dir, namespace.create_tables) diff -r 000000000000 -r 4f3585e2f14b env/bin/elbadmin --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/elbadmin Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,302 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Copyright (c) 2009 Chris Moyer http://coredumped.org/ +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, dis- +# tribute, sublicense, and/or sell copies of the Software, and to permit +# persons to whom the Software is furnished to do so, subject to the fol- +# lowing conditions: +# +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- +# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + +# +# Elastic Load Balancer Tool +# +VERSION = "0.2" +usage = """%prog [options] [command] +Commands: + list|ls List all Elastic Load Balancers + delete <name> Delete ELB <name> + get <name> Get all instances associated with <name> + create <name> Create an ELB; -z and -l are required + add <name> <instance> Add <instance> in ELB <name> + remove|rm <name> <instance> Remove <instance> from ELB <name> + reap <name> Remove terminated instances from ELB <name> + enable|en <name> <zone> Enable Zone <zone> for ELB <name> + disable <name> <zone> Disable Zone <zone> for ELB <name> + addl <name> Add listeners (specified by -l) to the ELB <name> + + rml <name> <port> Remove Listener(s) specified by the port on + the ELB <name> +""" + + +def find_elb(elb, name): + try: + elbs = elb.get_all_load_balancers(name) + except boto.exception.BotoServerError as se: + if se.code == 'LoadBalancerNotFound': + elbs = [] + else: + raise + + if len(elbs) < 1: + print "No load balancer by the name of %s found" % name + return None + elif len(elbs) > 1: + print "More than one elb matches %s?"
% name + return None + + # Should not happen + if name not in elbs[0].name: + print "No load balancer by the name of %s found" % name + return None + + return elbs[0] + + +def list(elb): + """List all ELBs""" + print "%-20s %s" % ("Name", "DNS Name") + print "-" * 80 + for b in elb.get_all_load_balancers(): + print "%-20s %s" % (b.name, b.dns_name) + +def check_valid_region(conn, region): + if conn is None: + print 'Invalid region (%s)' % region + sys.exit(1) + +def get(elb, name): + """Get details about ELB """ + + b = find_elb(elb, name) + if b: + print "=" * 80 + print "Name: %s" % b.name + print "DNS Name: %s" % b.dns_name + if b.canonical_hosted_zone_name: + chzn = b.canonical_hosted_zone_name + print "Canonical hosted zone name: %s" % chzn + if b.canonical_hosted_zone_name_id: + chznid = b.canonical_hosted_zone_name_id + print "Canonical hosted zone name id: %s" % chznid + print + + print "Health Check: %s" % b.health_check + print + + print "Listeners" + print "---------" + print "%-8s %-8s %s" % ("IN", "OUT", "PROTO") + for l in b.listeners: + print "%-8s %-8s %s" % (l[0], l[1], l[2]) + + print + + print " Zones " + print "---------" + for z in b.availability_zones: + print z + + print + + # Make map of all instance Id's to Name tags + import boto + from boto.compat.six import iteritems + if not options.region: + ec2 = boto.connect_ec2() + else: + ec2 = boto.ec2.connect_to_region(options.region) + check_valid_region(ec2, options.region) + + instance_health = b.get_instance_health() + instances = [state.instance_id for state in instance_health] + + names = dict((k,'') for k in instances) + for i in ec2.get_only_instances(): + if i.id in instances: + names[i.id] = i.tags.get('Name', '') + + name_column_width = max([4] + [len(v) for k,v in iteritems(names)]) + 2 + + print "Instances" + print "---------" + print "%-12s %-15s %-*s %s" % ("ID", + "STATE", + name_column_width, "NAME", + "DESCRIPTION") + for state in instance_health: + print "%-12s %-15s %-*s %s" % (state.instance_id, + state.state, + name_column_width, names[state.instance_id], + state.description) + + print + + +def create(elb, name, zones, listeners): + """Create an ELB named """ + l_list = [] + for l in listeners: + l = l.split(",") + if l[2] == 'HTTPS': + l_list.append((int(l[0]), int(l[1]), l[2], l[3])) + else: + l_list.append((int(l[0]), int(l[1]), l[2])) + + b = elb.create_load_balancer(name, zones, l_list) + return get(elb, name) + + +def delete(elb, name): + """Delete this ELB""" + b = find_elb(elb, name) + if b: + b.delete() + print "Load Balancer %s deleted" % name + + +def add_instances(elb, name, instances): + """Add to ELB """ + b = find_elb(elb, name) + if b: + b.register_instances(instances) + return get(elb, name) + + +def remove_instances(elb, name, instances): + """Remove instance from elb """ + b = find_elb(elb, name) + if b: + b.deregister_instances(instances) + return get(elb, name) + + +def reap_instances(elb, name): + """Remove terminated instances from elb """ + b = find_elb(elb, name) + if b: + for state in b.get_instance_health(): + if (state.state == 'OutOfService' and + state.description == 'Instance is in terminated state.'): + b.deregister_instances([state.instance_id]) + return get(elb, name) + + +def enable_zone(elb, name, zone): + """Enable for elb""" + b = find_elb(elb, name) + if b: + b.enable_zones([zone]) + return get(elb, name) + + +def disable_zone(elb, name, zone): + """Disable for elb""" + b = find_elb(elb, name) + if b: + b.disable_zones([zone]) + return get(elb, name) + + +def 
add_listener(elb, name, listeners): + """Add listeners to a given load balancer""" + l_list = [] + for l in listeners: + l = l.split(",") + l_list.append((int(l[0]), int(l[1]), l[2])) + b = find_elb(elb, name) + if b: + b.create_listeners(l_list) + return get(elb, name) + + +def rm_listener(elb, name, ports): + """Remove listeners from a given load balancer""" + b = find_elb(elb, name) + if b: + b.delete_listeners(ports) + return get(elb, name) + + +if __name__ == "__main__": + try: + import readline + except ImportError: + pass + import boto + import sys + from optparse import OptionParser + from boto.mashups.iobject import IObject + parser = OptionParser(version=VERSION, usage=usage) + parser.add_option("-z", "--zone", + help="Operate on zone", + action="append", default=[], dest="zones") + parser.add_option("-l", "--listener", + help="Specify Listener in,out,proto", + action="append", default=[], dest="listeners") + parser.add_option("-r", "--region", + help="Region to connect to", + action="store", dest="region") + + (options, args) = parser.parse_args() + + if len(args) < 1: + parser.print_help() + sys.exit(1) + + if not options.region: + elb = boto.connect_elb() + else: + import boto.ec2.elb + elb = boto.ec2.elb.connect_to_region(options.region) + check_valid_region(elb, options.region) + + print "%s" % (elb.region.endpoint) + + command = args[0].lower() + if command in ("ls", "list"): + list(elb) + elif command == "get": + get(elb, args[1]) + elif command == "create": + if not options.listeners: + print "-l option required for command create" + sys.exit(1) + if not options.zones: + print "-z option required for command create" + sys.exit(1) + create(elb, args[1], options.zones, options.listeners) + elif command == "delete": + delete(elb, args[1]) + elif command in ("add", "put"): + add_instances(elb, args[1], args[2:]) + elif command in ("rm", "remove"): + remove_instances(elb, args[1], args[2:]) + elif command == "reap": + reap_instances(elb, args[1]) + elif command in ("en", "enable"): + enable_zone(elb, args[1], args[2]) + elif command == "disable": + disable_zone(elb, args[1], args[2]) + elif command == "addl": + if not options.listeners: + print "-l option required for command addl" + sys.exit(1) + add_listener(elb, args[1], options.listeners) + elif command == "rml": + if not args[2:]: + print "port required" + sys.exit(2) + rm_listener(elb, args[1], args[2:]) diff -r 000000000000 -r 4f3585e2f14b env/bin/fetch_file --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/fetch_file Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,46 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Copyright (c) 2009 Chris Moyer http://coredumped.org +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, dis- +# tribute, sublicense, and/or sell copies of the Software, and to permit +# persons to whom the Software is furnished to do so, subject to the fol- +# lowing conditions: +# +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- +# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT +# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS +# IN THE SOFTWARE. +# +import sys + + +if __name__ == "__main__": + from optparse import OptionParser + usage = """%prog [options] URI +Fetch a URI using the boto library and (by default) pipe contents to STDOUT +The URI can be either an HTTP URL, or "s3://bucket_name/key_name" +""" + parser = OptionParser(version="0.1", usage=usage) + parser.add_option("-o", "--out-file", + help="File to receive output instead of STDOUT", + dest="outfile") + + (options, args) = parser.parse_args() + if len(args) < 1: + parser.print_help() + sys.exit(1) + from boto.utils import fetch_file + f = fetch_file(args[0]) + if options.outfile: + open(options.outfile, "w").write(f.read()) + else: + print(f.read()) diff -r 000000000000 -r 4f3585e2f14b env/bin/galaxy-tool-test --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/galaxy-tool-test Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from galaxy.tool_util.verify.script import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/galaxy-wait --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/galaxy-wait Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from ephemeris.sleep import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/get-tool-list --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/get-tool-list Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from ephemeris.get_tool_list_from_galaxy import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/glacier --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/glacier Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,161 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +# Copyright (c) 2012 Miguel Olivares http://moliware.com/ +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, dis- +# tribute, sublicense, and/or sell copies of the Software, and to permit +# persons to whom the Software is furnished to do so, subject to the fol- +# lowing conditions: +# +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- +# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT +# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS +# IN THE SOFTWARE. +# +""" + glacier + ~~~~~~~ + + Amazon Glacier tool built on top of boto. Look at the usage method to see + how to use it. + + Author: Miguel Olivares +""" +import sys + +from boto.glacier import connect_to_region +from getopt import getopt, GetoptError +from os.path import isfile, basename + + +COMMANDS = ('vaults', 'jobs', 'upload') + + +def usage(): + print(""" +glacier <command> [args] + + Commands + vaults - Operations with vaults + jobs - Operations with jobs + upload - Upload files to a vault. If the vault doesn't exist, it is + created + + Common args: + --access_key - Your AWS Access Key ID. If not supplied, boto will + use the value of the environment variable + AWS_ACCESS_KEY_ID + --secret_key - Your AWS Secret Access Key. If not supplied, boto + will use the value of the environment variable + AWS_SECRET_ACCESS_KEY + --region - AWS region to use. Possible values: us-east-1, us-west-1, + us-west-2, ap-northeast-1, eu-west-1. + Default: us-east-1 + + Vaults operations: + + List vaults: + glacier vaults + + Jobs operations: + + List jobs: + glacier jobs <vault name> + + Uploading files: + + glacier upload <vault name> <files> + + Examples: + glacier upload pics *.jpg + glacier upload pics a.jpg b.jpg +""") + sys.exit() + + +def connect(region, debug_level=0, access_key=None, secret_key=None): + """ Connect to a specific region """ + layer2 = connect_to_region(region, + aws_access_key_id=access_key, + aws_secret_access_key=secret_key, + debug=debug_level) + if layer2 is None: + print('Invalid region (%s)' % region) + sys.exit(1) + return layer2 + + +def list_vaults(region, access_key=None, secret_key=None): + layer2 = connect(region, access_key = access_key, secret_key = secret_key) + for vault in layer2.list_vaults(): + print(vault.arn) + + +def list_jobs(vault_name, region, access_key=None, secret_key=None): + layer2 = connect(region, access_key = access_key, secret_key = secret_key) + print(layer2.layer1.list_jobs(vault_name)) + + +def upload_files(vault_name, filenames, region, access_key=None, secret_key=None): + layer2 = connect(region, access_key = access_key, secret_key = secret_key) + layer2.create_vault(vault_name) + glacier_vault = layer2.get_vault(vault_name) + for filename in filenames: + if isfile(filename): + sys.stdout.write('Uploading %s to %s...' % (filename, vault_name)) + sys.stdout.flush() + archive_id = glacier_vault.upload_archive( + filename, + description = basename(filename)) + print(' done. 
Vault returned ArchiveID %s' % archive_id) + +def main(): + if len(sys.argv) < 2: + usage() + + command = sys.argv[1] + if command not in COMMANDS: + usage() + + argv = sys.argv[2:] + options = 'a:s:r:' + long_options = ['access_key=', 'secret_key=', 'region='] + try: + opts, args = getopt(argv, options, long_options) + except GetoptError as e: + usage() + + # Parse agument + access_key = secret_key = None + region = 'us-east-1' + for option, value in opts: + if option in ('-a', '--access_key'): + access_key = value + elif option in ('-s', '--secret_key'): + secret_key = value + elif option in ('-r', '--region'): + region = value + # handle each command + if command == 'vaults': + list_vaults(region, access_key, secret_key) + elif command == 'jobs': + if len(args) != 1: + usage() + list_jobs(args[0], region, access_key, secret_key) + elif command == 'upload': + if len(args) < 2: + usage() + upload_files(args[0], args[1:], region, access_key, secret_key) + + +if __name__ == '__main__': + main() diff -r 000000000000 -r 4f3585e2f14b env/bin/gxwf-abstract-export --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/gxwf-abstract-export Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from gxformat2.abstract import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/gxwf-lint --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/gxwf-lint Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from gxformat2.lint import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/gxwf-to-format2 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/gxwf-to-format2 Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from gxformat2.export import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/gxwf-to-native --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/gxwf-to-native Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from gxformat2.converter import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/gxwf-viz --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/gxwf-viz Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from gxformat2.cytoscape import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/humanfriendly --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/humanfriendly Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ 
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from humanfriendly.cli import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/install_tool_deps --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/install_tool_deps Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from ephemeris.install_tool_deps import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/instance_events --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/instance_events Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,145 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Copyright (c) 2011 Jim Browne http://www.42lines.net +# Borrows heavily from boto/bin/list_instances which has no attribution +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, dis- +# tribute, sublicense, and/or sell copies of the Software, and to permit +# persons to whom the Software is furnished to do so, subject to the fol- +# lowing conditions: +# +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- +# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT +# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + +VERSION="0.1" +usage = """%prog [options] +Options: + -h, --help show help message (including options list) and exit +""" + +from operator import itemgetter + +HEADERS = { + 'ID': {'get': itemgetter('id'), 'length':14}, + 'Zone': {'get': itemgetter('zone'), 'length':14}, + 'Hostname': {'get': itemgetter('dns'), 'length':20}, + 'Code': {'get': itemgetter('code'), 'length':18}, + 'Description': {'get': itemgetter('description'), 'length':30}, + 'NotBefore': {'get': itemgetter('not_before'), 'length':25}, + 'NotAfter': {'get': itemgetter('not_after'), 'length':25}, + 'T:': {'length': 30}, +} + +def get_column(name, event=None): + if name.startswith('T:'): + return event[name] + return HEADERS[name]['get'](event) + +def list(region, headers, order, completed): + """List status events for all instances in a given region""" + + import re + + ec2 = boto.connect_ec2(region=region) + + reservations = ec2.get_all_reservations() + + instanceinfo = {} + events = {} + + displaytags = [ x for x in headers if x.startswith('T:') ] + + # Collect the tag for every possible instance + for res in reservations: + for instance in res.instances: + iid = instance.id + instanceinfo[iid] = {} + for tagname in displaytags: + _, tag = tagname.split(':', 1) + instanceinfo[iid][tagname] = instance.tags.get(tag,'') + instanceinfo[iid]['dns'] = instance.public_dns_name + + stats = ec2.get_all_instance_status() + + for stat in stats: + if stat.events: + for event in stat.events: + events[stat.id] = {} + events[stat.id]['id'] = stat.id + events[stat.id]['dns'] = instanceinfo[stat.id]['dns'] + events[stat.id]['zone'] = stat.zone + for tag in displaytags: + events[stat.id][tag] = instanceinfo[stat.id][tag] + events[stat.id]['code'] = event.code + events[stat.id]['description'] = event.description + events[stat.id]['not_before'] = event.not_before + events[stat.id]['not_after'] = event.not_after + if completed and re.match('^\[Completed\]',event.description): + events[stat.id]['not_before'] = 'Completed' + events[stat.id]['not_after'] = 'Completed' + + # Create format string + format_string = "" + for h in headers: + if h.startswith('T:'): + format_string += "%%-%ds" % HEADERS['T:']['length'] + else: + format_string += "%%-%ds" % HEADERS[h]['length'] + + + print format_string % headers + print "-" * len(format_string % headers) + + for instance in sorted(events, + key=lambda ev: get_column(order, events[ev])): + e = events[instance] + print format_string % tuple(get_column(h, e) for h in headers) + +if __name__ == "__main__": + import boto + from optparse import OptionParser + from boto.ec2 import regions + + parser = OptionParser(version=VERSION, usage=usage) + parser.add_option("-a", "--all", help="check all regions", dest="all", default=False,action="store_true") + parser.add_option("-r", "--region", help="region to check (default us-east-1)", dest="region", default="us-east-1") + parser.add_option("-H", "--headers", help="Set headers (use 'T:tagname' for including tags)", default=None, action="store", dest="headers", metavar="ID,Zone,Hostname,Code,Description,NotBefore,NotAfter,T:Name") + parser.add_option("-S", "--sort", help="Header for sort order", default=None, action="store", dest="order",metavar="HeaderName") + parser.add_option("-c", "--completed", help="List time fields as \"Completed\" 
for completed events (Default: false)", default=False, action="store_true", dest="completed") + + (options, args) = parser.parse_args() + + if options.headers: + headers = tuple(options.headers.split(',')) + else: + headers = ('ID', 'Zone', 'Hostname', 'Code', 'NotBefore', 'NotAfter') + + if options.order: + order = options.order + else: + order = 'ID' + + if options.all: + for r in regions(): + print "Region %s" % r.name + list(r, headers, order, options.completed) + else: + # Connect the region + for r in regions(): + if r.name == options.region: + region = r + break + else: + print "Region %s not found." % options.region + sys.exit(1) + + list(r, headers, order, options.completed) diff -r 000000000000 -r 4f3585e2f14b env/bin/kill_instance --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/kill_instance Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,35 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +import sys +from optparse import OptionParser + +import boto +from boto.ec2 import regions + + + +def kill_instance(region, ids): + """Kill an instances given it's instance IDs""" + # Connect the region + ec2 = boto.connect_ec2(region=region) + for instance_id in ids: + print("Stopping instance: %s" % instance_id) + ec2.terminate_instances([instance_id]) + + +if __name__ == "__main__": + parser = OptionParser(usage="kill_instance [-r] id [id ...]") + parser.add_option("-r", "--region", help="Region (default us-east-1)", dest="region", default="us-east-1") + (options, args) = parser.parse_args() + if not args: + parser.print_help() + sys.exit(1) + for r in regions(): + if r.name == options.region: + region = r + break + else: + print("Region %s not found." % options.region) + sys.exit(1) + + kill_instance(region, args) diff -r 000000000000 -r 4f3585e2f14b env/bin/launch_instance --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/launch_instance Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,252 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Copyright (c) 2009 Chris Moyer http://coredumped.org/ +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, dis- +# tribute, sublicense, and/or sell copies of the Software, and to permit +# persons to whom the Software is furnished to do so, subject to the fol- +# lowing conditions: +# +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- +# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT +# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + +# +# Utility to launch an EC2 Instance +# +VERSION="0.2" + + +CLOUD_INIT_SCRIPT = """#!/usr/bin/env python +f = open("/etc/boto.cfg", "w") +f.write(\"\"\"%s\"\"\") +f.close() +""" +import boto.pyami.config +import boto.utils +import re, os +from boto.compat import ConfigParser + +class Config(boto.pyami.config.Config): + """A special config class that also adds import abilities + Directly in the config file. To have a config file import + another config file, simply use "#import " where + is either a relative path or a full URL to another config + """ + + def __init__(self): + ConfigParser.__init__(self, {'working_dir' : '/mnt/pyami', 'debug' : '0'}) + + def add_config(self, file_url): + """Add a config file to this configuration + :param file_url: URL for the file to add, or a local path + :type file_url: str + """ + if not re.match("^([a-zA-Z0-9]*:\/\/)(.*)", file_url): + if not file_url.startswith("/"): + file_url = os.path.join(os.getcwd(), file_url) + file_url = "file://%s" % file_url + (base_url, file_name) = file_url.rsplit("/", 1) + base_config = boto.utils.fetch_file(file_url) + base_config.seek(0) + for line in base_config.readlines(): + match = re.match("^#import[\s\t]*([^\s^\t]*)[\s\t]*$", line) + if match: + self.add_config("%s/%s" % (base_url, match.group(1))) + base_config.seek(0) + self.readfp(base_config) + + def add_creds(self, ec2): + """Add the credentials to this config if they don't already exist""" + if not self.has_section('Credentials'): + self.add_section('Credentials') + self.set('Credentials', 'aws_access_key_id', ec2.aws_access_key_id) + self.set('Credentials', 'aws_secret_access_key', ec2.aws_secret_access_key) + + + def __str__(self): + """Get config as string""" + from StringIO import StringIO + s = StringIO() + self.write(s) + return s.getvalue() + +SCRIPTS = [] + +def scripts_callback(option, opt, value, parser): + arg = value.split(',') + if len(arg) == 1: + SCRIPTS.append(arg[0]) + else: + SCRIPTS.extend(arg) + setattr(parser.values, option.dest, SCRIPTS) + +def add_script(scr_url): + """Read a script and any scripts that are added using #import""" + base_url = '/'.join(scr_url.split('/')[:-1]) + '/' + script_raw = boto.utils.fetch_file(scr_url) + script_content = '' + for line in script_raw.readlines(): + match = re.match("^#import[\s\t]*([^\s^\t]*)[\s\t]*$", line) + #if there is an import + if match: + #Read the other script and put it in that spot + script_content += add_script("%s/%s" % (base_url, match.group(1))) + else: + #Otherwise, add the line and move on + script_content += line + return script_content + +if __name__ == "__main__": + try: + import readline + except ImportError: + pass + import sys + import time + import boto + from boto.ec2 import regions + from optparse import OptionParser + from boto.mashups.iobject import IObject + parser = OptionParser(version=VERSION, usage="%prog [options] config_url") + parser.add_option("-c", "--max-count", help="Maximum number of this type of instance to launch", dest="max_count", default="1") + parser.add_option("--min-count", help="Minimum number of this type of instance to launch", dest="min_count", default="1") + parser.add_option("--cloud-init", help="Indicates that this is an instance that uses 'CloudInit', Ubuntu's cloud bootstrap process. 
This wraps the config in a shell script command instead of just passing it in directly", dest="cloud_init", default=False, action="store_true") + parser.add_option("-g", "--groups", help="Security Groups to add this instance to", action="append", dest="groups") + parser.add_option("-a", "--ami", help="AMI to launch", dest="ami_id") + parser.add_option("-t", "--type", help="Type of Instance (default m1.small)", dest="type", default="m1.small") + parser.add_option("-k", "--key", help="Keypair", dest="key_name") + parser.add_option("-z", "--zone", help="Zone (default us-east-1a)", dest="zone", default="us-east-1a") + parser.add_option("-r", "--region", help="Region (default us-east-1)", dest="region", default="us-east-1") + parser.add_option("-i", "--ip", help="Elastic IP", dest="elastic_ip") + parser.add_option("-n", "--no-add-cred", help="Don't add a credentials section", default=False, action="store_true", dest="nocred") + parser.add_option("--save-ebs", help="Save the EBS volume on shutdown, instead of deleting it", default=False, action="store_true", dest="save_ebs") + parser.add_option("-w", "--wait", help="Wait until instance is running", default=False, action="store_true", dest="wait") + parser.add_option("-d", "--dns", help="Returns public and private DNS (implicates --wait)", default=False, action="store_true", dest="dns") + parser.add_option("-T", "--tag", help="Set tag", default=None, action="append", dest="tags", metavar="key:value") + parser.add_option("-s", "--scripts", help="Pass in a script or a folder containing scripts to be run when the instance starts up, assumes cloud-init. Specify scripts in a list specified by commas. If multiple scripts are specified, they are run lexically (A good way to ensure they run in the order is to prefix filenames with numbers)", type='string', action="callback", callback=scripts_callback) + parser.add_option("--role", help="IAM Role to use, this implies --no-add-cred", dest="role") + + (options, args) = parser.parse_args() + + if len(args) < 1: + parser.print_help() + sys.exit(1) + file_url = os.path.expanduser(args[0]) + + cfg = Config() + cfg.add_config(file_url) + + for r in regions(): + if r.name == options.region: + region = r + break + else: + print("Region %s not found." 
% options.region) + sys.exit(1) + ec2 = boto.connect_ec2(region=region) + if not options.nocred and not options.role: + cfg.add_creds(ec2) + + iobj = IObject() + if options.ami_id: + ami = ec2.get_image(options.ami_id) + else: + ami_id = options.ami_id + l = [(a, a.id, a.location) for a in ec2.get_all_images()] + ami = iobj.choose_from_list(l, prompt='Choose AMI') + + if options.key_name: + key_name = options.key_name + else: + l = [(k, k.name, '') for k in ec2.get_all_key_pairs()] + key_name = iobj.choose_from_list(l, prompt='Choose Keypair').name + + if options.groups: + groups = options.groups + else: + groups = [] + l = [(g, g.name, g.description) for g in ec2.get_all_security_groups()] + g = iobj.choose_from_list(l, prompt='Choose Primary Security Group') + while g != None: + groups.append(g) + l.remove((g, g.name, g.description)) + g = iobj.choose_from_list(l, prompt='Choose Additional Security Group (0 to quit)') + + user_data = str(cfg) + # If it's a cloud init AMI, + # then we need to wrap the config in our + # little wrapper shell script + + if options.cloud_init: + user_data = CLOUD_INIT_SCRIPT % user_data + scriptuples = [] + if options.scripts: + scripts = options.scripts + scriptuples.append(('user_data', user_data)) + for scr in scripts: + scr_url = scr + if not re.match("^([a-zA-Z0-9]*:\/\/)(.*)", scr_url): + if not scr_url.startswith("/"): + scr_url = os.path.join(os.getcwd(), scr_url) + try: + newfiles = os.listdir(scr_url) + for f in newfiles: + #put the scripts in the folder in the array such that they run in the correct order + scripts.insert(scripts.index(scr) + 1, scr.split("/")[-1] + "/" + f) + except OSError: + scr_url = "file://%s" % scr_url + try: + scriptuples.append((scr, add_script(scr_url))) + except Exception as e: + pass + + user_data = boto.utils.write_mime_multipart(scriptuples, compress=True) + + shutdown_proc = "terminate" + if options.save_ebs: + shutdown_proc = "save" + + instance_profile_name = None + if options.role: + instance_profile_name = options.role + + r = ami.run(min_count=int(options.min_count), max_count=int(options.max_count), + key_name=key_name, user_data=user_data, + security_groups=groups, instance_type=options.type, + placement=options.zone, instance_initiated_shutdown_behavior=shutdown_proc, + instance_profile_name=instance_profile_name) + + instance = r.instances[0] + + if options.tags: + for tag_pair in options.tags: + name = tag_pair + value = '' + if ':' in tag_pair: + name, value = tag_pair.split(':', 1) + instance.add_tag(name, value) + + if options.dns: + options.wait = True + + if not options.wait: + sys.exit(0) + + while True: + instance.update() + if instance.state == 'running': + break + time.sleep(3) + + if options.dns: + print("Public DNS name: %s" % instance.public_dns_name) + print("Private DNS name: %s" % instance.private_dns_name) diff -r 000000000000 -r 4f3585e2f14b env/bin/list_instances --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/list_instances Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,90 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +import sys +from operator import attrgetter +from optparse import OptionParser + +import boto +from boto.ec2 import regions + + +HEADERS = { + 'ID': {'get': attrgetter('id'), 'length':15}, + 'Zone': {'get': attrgetter('placement'), 'length':15}, + 'Groups': {'get': attrgetter('groups'), 'length':30}, + 'Hostname': {'get': attrgetter('public_dns_name'), 'length':50}, + 'PrivateHostname': {'get': 
attrgetter('private_dns_name'), 'length':50}, + 'State': {'get': attrgetter('state'), 'length':15}, + 'Image': {'get': attrgetter('image_id'), 'length':15}, + 'Type': {'get': attrgetter('instance_type'), 'length':15}, + 'IP': {'get': attrgetter('ip_address'), 'length':16}, + 'PrivateIP': {'get': attrgetter('private_ip_address'), 'length':16}, + 'Key': {'get': attrgetter('key_name'), 'length':25}, + 'T:': {'length': 30}, +} + +def get_column(name, instance=None): + if name.startswith('T:'): + _, tag = name.split(':', 1) + return instance.tags.get(tag, '') + return HEADERS[name]['get'](instance) + + +def main(): + parser = OptionParser() + parser.add_option("-r", "--region", help="Region (default us-east-1)", dest="region", default="us-east-1") + parser.add_option("-H", "--headers", help="Set headers (use 'T:tagname' for including tags)", default=None, action="store", dest="headers", metavar="ID,Zone,Groups,Hostname,State,T:Name") + parser.add_option("-t", "--tab", help="Tab delimited, skip header - useful in shell scripts", action="store_true", default=False) + parser.add_option("-f", "--filter", help="Filter option sent to DescribeInstances API call, format is key1=value1,key2=value2,...", default=None) + (options, args) = parser.parse_args() + + + # Connect the region + for r in regions(): + if r.name == options.region: + region = r + break + else: + print("Region %s not found." % options.region) + sys.exit(1) + ec2 = boto.connect_ec2(region=region) + + # Read headers + if options.headers: + headers = tuple(options.headers.split(',')) + else: + headers = ("ID", 'Zone', "Groups", "Hostname") + + # Create format string + format_string = "" + for h in headers: + if h.startswith('T:'): + format_string += "%%-%ds" % HEADERS['T:']['length'] + else: + format_string += "%%-%ds" % HEADERS[h]['length'] + + + # Parse filters (if any) + if options.filter: + filters = dict([entry.split('=') for entry in options.filter.split(',')]) + else: + filters = {} + + # List and print + + if not options.tab: + print(format_string % headers) + print("-" * len(format_string % headers)) + + for r in ec2.get_all_reservations(filters=filters): + groups = [g.name for g in r.groups] + for i in r.instances: + i.groups = ','.join(groups) + if options.tab: + print("\t".join(tuple(get_column(h, i) for h in headers))) + else: + print(format_string % tuple(get_column(h, i) for h in headers)) + + +if __name__ == "__main__": + main() diff -r 000000000000 -r 4f3585e2f14b env/bin/lss3 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/lss3 Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,113 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +import boto +from boto.exception import S3ResponseError +from boto.s3.connection import OrdinaryCallingFormat + + +def sizeof_fmt(num): + for x in ['b ', 'KB', 'MB', 'GB', 'TB', 'XB']: + if num < 1024.0: + return "%3.1f %s" % (num, x) + num /= 1024.0 + return "%3.1f %s" % (num, x) + + +def list_bucket(b, prefix=None, marker=None): + """List everything in a bucket""" + from boto.s3.prefix import Prefix + from boto.s3.key import Key + total = 0 + + if prefix: + if not prefix.endswith("/"): + prefix = prefix + "/" + query = b.list(prefix=prefix, delimiter="/", marker=marker) + print("%s" % prefix) + else: + query = b.list(delimiter="/", marker=marker) + + num = 0 + for k in query: + num += 1 + mode = "-rwx---" + if isinstance(k, Prefix): + mode = "drwxr--" + size = 0 + else: + size = k.size + for g in k.get_acl().acl.grants: + if g.id == None: + if 
g.permission == "READ": + mode = "-rwxr--" + elif g.permission == "FULL_CONTROL": + mode = "-rwxrwx" + if isinstance(k, Key): + print("%s\t%s\t%010s\t%s" % (mode, k.last_modified, + sizeof_fmt(size), k.name)) + else: + #If it's not a Key object, it doesn't have a last_modified time, so + #print nothing instead + print("%s\t%s\t%010s\t%s" % (mode, ' ' * 24, + sizeof_fmt(size), k.name)) + total += size + print ("=" * 80) + print ("\t\tTOTAL: \t%010s \t%i Files" % (sizeof_fmt(total), num)) + + +def list_buckets(s3, display_tags=False): + """List all the buckets""" + for b in s3.get_all_buckets(): + print(b.name) + if display_tags: + try: + tags = b.get_tags() + for tag in tags[0]: + print(" %s:%s" % (tag.key, tag.value)) + except S3ResponseError as e: + if e.status != 404: + raise + + +def main(): + import optparse + import sys + + usage = "usage: %prog [options] [BUCKET1] [BUCKET2]" + description = "List all S3 buckets OR list keys in the named buckets" + parser = optparse.OptionParser(description=description, usage=usage) + parser.add_option('-m', '--marker', + help='The S3 key where the listing starts after it.') + parser.add_option('-t', '--tags', action='store_true', + help='Display tags when listing all buckets.') + options, buckets = parser.parse_args() + marker = options.marker + + if not buckets: + list_buckets(boto.connect_s3(), options.tags) + sys.exit(0) + + if options.tags: + print("-t option only works for the overall bucket list") + sys.exit(1) + + pairs = [] + mixedCase = False + for name in buckets: + if "/" in name: + pairs.append(name.split("/", 1)) + else: + pairs.append([name, None]) + if pairs[-1][0].lower() != pairs[-1][0]: + mixedCase = True + + if mixedCase: + s3 = boto.connect_s3(calling_format=OrdinaryCallingFormat()) + else: + s3 = boto.connect_s3() + + for name, prefix in pairs: + list_bucket(s3.get_bucket(name), prefix, marker=marker) + + +if __name__ == "__main__": + main() diff -r 000000000000 -r 4f3585e2f14b env/bin/mturk --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/mturk Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,514 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Copyright 2012, 2014 Kodi Arfer +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, dis- +# tribute, sublicense, and/or sell copies of the Software, and to permit +# persons to whom the Software is furnished to do so, subject to the fol- +# lowing conditions: +# +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- +# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + +import argparse # Hence, Python 2.7 is required. 
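+# Editor's note: the comment above refers to argparse entering the standard + # library in Python 2.7; the rest of this script also relies on Python 2 + # print statements, so the python3 virtualenv shebang above is an artifact + # of the install and will not actually run this file unmodified.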
+import sys +import os.path +import string +import inspect +import datetime, calendar +import boto.mturk.connection, boto.mturk.price, boto.mturk.question, boto.mturk.qualification +from boto.compat import json + +# -------------------------------------------------- +# Globals +# ------------------------------------------------- + +interactive = False +con = None +mturk_website = None + +default_nicknames_path = os.path.expanduser('~/.boto_mturkcli_hit_nicknames') +nicknames = {} +nickname_pool = set(string.ascii_lowercase) + +get_assignments_page_size = 100 + +time_units = dict( + s = 1, + min = 60, + h = 60 * 60, + d = 24 * 60 * 60) + +qual_requirements = dict( + Adult = '00000000000000000060', + Locale = '00000000000000000071', + NumberHITsApproved = '00000000000000000040', + PercentAssignmentsSubmitted = '00000000000000000000', + PercentAssignmentsAbandoned = '00000000000000000070', + PercentAssignmentsReturned = '000000000000000000E0', + PercentAssignmentsApproved = '000000000000000000L0', + PercentAssignmentsRejected = '000000000000000000S0') + +qual_comparators = {v : k for k, v in dict( + LessThan = '<', LessThanOrEqualTo = '<=', + GreaterThan = '>', GreaterThanOrEqualTo = '>=', + EqualTo = '==', NotEqualTo = '!=', + Exists = 'exists').items()} + +example_config_file = '''Example configuration file: + + { + "title": "Pick your favorite color", + "description": "In this task, you are asked to pick your favorite color.", + "reward": 0.50, + "assignments": 10, + "duration": "20 min", + "keywords": ["color", "favorites", "survey"], + "lifetime": "7 d", + "approval_delay": "14 d", + "qualifications": [ + "PercentAssignmentsApproved > 90", + "Locale == US", + "2ARFPLSP75KLA8M8DH1HTEQVJT3SY6 exists" + ], + "question_url": "http://example.com/myhit", + "question_frame_height": 450 + }''' + +# -------------------------------------------------- +# Subroutines +# -------------------------------------------------- + +def unjson(path): + with open(path) as o: + return json.load(o) + +def add_argparse_arguments(parser): + parser.add_argument('-P', '--production', + dest = 'sandbox', action = 'store_false', default = True, + help = 'use the production site (default: use the sandbox)') + parser.add_argument('--nicknames', + dest = 'nicknames_path', metavar = 'PATH', + default = default_nicknames_path, + help = 'where to store HIT nicknames (default: {})'.format( + default_nicknames_path)) + +def init_by_args(args): + init(args.sandbox, args.nicknames_path) + +def init(sandbox = False, nicknames_path = default_nicknames_path): + global con, mturk_website, nicknames, original_nicknames + + mturk_website = 'workersandbox.mturk.com' if sandbox else 'www.mturk.com' + con = boto.mturk.connection.MTurkConnection( + host = 'mechanicalturk.sandbox.amazonaws.com' if sandbox else 'mechanicalturk.amazonaws.com') + + try: + nicknames = unjson(nicknames_path) + except IOError: + nicknames = {} + original_nicknames = nicknames.copy() + +def save_nicknames(nicknames_path = default_nicknames_path): + if nicknames != original_nicknames: + with open(nicknames_path, 'w') as o: + json.dump(nicknames, o, sort_keys = True, indent = 4) + print >>o + +def parse_duration(s): + '''Parses durations like "2 d", "48 h", "2880 min", +"172800 s", or "172800".''' + x = s.split() + return int(x[0]) * time_units['s' if len(x) == 1 else x[1]] +def display_duration(n): + for unit, m in sorted(time_units.items(), key = lambda x: -x[1]): + if n % m == 0: + return '{} {}'.format(n / m, unit) + +def parse_qualification(inp): + '''Parses 
qualifications like "PercentAssignmentsApproved > 90", +"Locale == US", and "2ARFPLSP75KLA8M8DH1HTEQVJT3SY6 exists".''' + inp = inp.split() + name, comparator, value = inp.pop(0), inp.pop(0), (inp[0] if len(inp) else None) + qtid = qual_requirements.get(name) + if qtid is None: + # Treat "name" as a Qualification Type ID. + qtid = name + if qtid == qual_requirements['Locale']: + return boto.mturk.qualification.LocaleRequirement( + qual_comparators[comparator], + value, + required_to_preview = False) + return boto.mturk.qualification.Requirement( + qtid, + qual_comparators[comparator], + value, + required_to_preview = qtid == qual_requirements['Adult']) + # Thus required_to_preview is true only for the + # Worker_Adult requirement. + +def preview_url(hit): + return 'https://{}/mturk/preview?groupId={}'.format( + mturk_website, hit.HITTypeId) + +def parse_timestamp(s): + '''Takes a timestamp like "2012-11-24T16:34:41Z". + +Returns a datetime object in the local time zone.''' + return datetime.datetime.fromtimestamp( + calendar.timegm( + datetime.datetime.strptime(s, '%Y-%m-%dT%H:%M:%SZ').timetuple())) + +def get_hitid(nickname_or_hitid): + return nicknames.get(nickname_or_hitid) or nickname_or_hitid + +def get_nickname(hitid): + for k, v in nicknames.items(): + if v == hitid: + return k + return None + +def display_datetime(dt): + return dt.strftime('%e %b %Y, %l:%M %P') + +def display_hit(hit, verbose = False): + et = parse_timestamp(hit.Expiration) + return '\n'.join([ + '{} - {} ({}, {}, {})'.format( + get_nickname(hit.HITId), + hit.Title, + hit.FormattedPrice, + display_duration(int(hit.AssignmentDurationInSeconds)), + hit.HITStatus), + 'HIT ID: ' + hit.HITId, + 'Type ID: ' + hit.HITTypeId, + 'Group ID: ' + hit.HITGroupId, + 'Preview: ' + preview_url(hit), + 'Created {} {}'.format( + display_datetime(parse_timestamp(hit.CreationTime)), + 'Expired' if et <= datetime.datetime.now() else + 'Expires ' + display_datetime(et)), + 'Assignments: {} -- {} avail, {} pending, {} reviewable, {} reviewed'.format( + hit.MaxAssignments, + hit.NumberOfAssignmentsAvailable, + hit.NumberOfAssignmentsPending, + int(hit.MaxAssignments) - (int(hit.NumberOfAssignmentsAvailable) + int(hit.NumberOfAssignmentsPending) + int(hit.NumberOfAssignmentsCompleted)), + hit.NumberOfAssignmentsCompleted) + if hasattr(hit, 'NumberOfAssignmentsAvailable') + else 'Assignments: {} total'.format(hit.MaxAssignments), + # For some reason, SearchHITs includes the + # NumberOfAssignmentsFoobar fields but GetHIT doesn't. + ] + ([] if not verbose else [ + '\nDescription: ' + hit.Description, + '\nKeywords: ' + hit.Keywords + ])) + '\n' + +def digest_assignment(a): + return dict( + answers = {str(x.qid): str(x.fields[0]) for x in a.answers[0]}, + **{k: str(getattr(a, k)) for k in ( + 'AcceptTime', 'SubmitTime', + 'HITId', 'AssignmentId', 'WorkerId', + 'AssignmentStatus')}) + +# -------------------------------------------------- +# Commands +# -------------------------------------------------- + +def get_balance(): + return con.get_account_balance() + +def show_hit(hit): + return display_hit(con.get_hit(hit)[0], verbose = True) + +def list_hits(): + 'Lists your 10 most recently created HITs, with the most recent last.' 
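+    # Editor's note: search_hits is asked for CreationTime in descending order + # (newest first); reversed() then flips that single 10-item page so output + # ends with the newest HIT. This relies on Python 2's map() returning a + # list, since reversed() needs a sequence, not an iterator.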
+ return '\n'.join(reversed(map(display_hit, con.search_hits( + sort_by = 'CreationTime', + sort_direction = 'Descending', + page_size = 10)))) + +def make_hit(title, description, keywords, reward, question_url, question_frame_height, duration, assignments, approval_delay, lifetime, qualifications = []): + r = con.create_hit( + title = title, + description = description, + keywords = con.get_keywords_as_string(keywords), + reward = con.get_price_as_price(reward), + question = boto.mturk.question.ExternalQuestion( + question_url, + question_frame_height), + duration = parse_duration(duration), + qualifications = boto.mturk.qualification.Qualifications( + map(parse_qualification, qualifications)), + max_assignments = assignments, + approval_delay = parse_duration(approval_delay), + lifetime = parse_duration(lifetime)) + nick = None + available_nicks = nickname_pool - set(nicknames.keys()) + if available_nicks: + nick = min(available_nicks) + nicknames[nick] = r[0].HITId + if interactive: + print 'Nickname:', nick + print 'HIT ID:', r[0].HITId + print 'Preview:', preview_url(r[0]) + else: + return r[0] + +def extend_hit(hit, assignments_increment = None, expiration_increment = None): + con.extend_hit(hit, assignments_increment, expiration_increment) + +def expire_hit(hit): + con.expire_hit(hit) + +def delete_hit(hit): + '''Deletes a HIT using DisableHIT. + +Unreviewed assignments get automatically approved. Unsubmitted +assignments get automatically approved upon submission. + +The API docs say DisableHIT doesn't work with Reviewable HITs, +but apparently, it does.''' + con.disable_hit(hit) + global nicknames + nicknames = {k: v for k, v in nicknames.items() if v != hit} + +def list_assignments(hit, only_reviewable = False): + # Accumulate all relevant assignments, one page of results at + # a time. 
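+    # Editor's note: GetAssignmentsForHIT is paginated, so the loop below keeps + # requesting pages of get_assignments_page_size (100) entries and stops once + # the accumulated list reaches rs.TotalNumResults as reported by the service.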
+ assignments = [] + page = 1 + while True: + rs = con.get_assignments( + hit_id = hit, + page_size = get_assignments_page_size, + page_number = page, + status = 'Submitted' if only_reviewable else None) + assignments += map(digest_assignment, rs) + if len(assignments) >= int(rs.TotalNumResults): + break + page += 1 + if interactive: + print json.dumps(assignments, sort_keys = True, indent = 4) + print ' '.join([a['AssignmentId'] for a in assignments]) + print ' '.join([a['WorkerId'] + ',' + a['AssignmentId'] for a in assignments]) + else: + return assignments + +def grant_bonus(message, amount, pairs): + for worker, assignment in pairs: + con.grant_bonus(worker, assignment, con.get_price_as_price(amount), message) + if interactive: print 'Bonused', worker + +def approve_assignments(message, assignments): + for a in assignments: + con.approve_assignment(a, message) + if interactive: print 'Approved', a + +def reject_assignments(message, assignments): + for a in assignments: + con.reject_assignment(a, message) + if interactive: print 'Rejected', a + +def unreject_assignments(message, assignments): + for a in assignments: + con.approve_rejected_assignment(a, message) + if interactive: print 'Unrejected', a + +def notify_workers(subject, text, workers): + con.notify_workers(workers, subject, text) + +def give_qualification(qualification, workers, value = 1, notify = True): + for w in workers: + con.assign_qualification(qualification, w, value, notify) + if interactive: print 'Gave to', w + +def revoke_qualification(qualification, workers, message = None): + for w in workers: + con.revoke_qualification(w, qualification, message) + if interactive: print 'Revoked from', w + +# -------------------------------------------------- +# Mainline code +# -------------------------------------------------- + +if __name__ == '__main__': + interactive = True + + parser = argparse.ArgumentParser() + add_argparse_arguments(parser) + subs = parser.add_subparsers() + + sub = subs.add_parser('bal', + help = 'display your prepaid balance') + sub.set_defaults(f = get_balance, a = lambda: []) + + sub = subs.add_parser('hit', + help = 'get information about a HIT') + sub.add_argument('HIT', + help = 'nickname or ID of the HIT to show') + sub.set_defaults(f = show_hit, a = lambda: + [get_hitid(args.HIT)]) + + sub = subs.add_parser('hits', + help = 'list all your HITs') + sub.set_defaults(f = list_hits, a = lambda: []) + + sub = subs.add_parser('new', + help = 'create a new HIT (external questions only)', + epilog = example_config_file, + formatter_class = argparse.RawDescriptionHelpFormatter) + sub.add_argument('JSON_PATH', + help = 'path to JSON configuration file for the HIT') + sub.add_argument('-u', '--question-url', dest = 'question_url', + metavar = 'URL', + help = 'URL for the external question') + sub.add_argument('-a', '--assignments', dest = 'assignments', + type = int, metavar = 'N', + help = 'number of assignments') + sub.add_argument('-r', '--reward', dest = 'reward', + type = float, metavar = 'PRICE', + help = 'reward amount, in USD') + sub.set_defaults(f = make_hit, a = lambda: dict( + unjson(args.JSON_PATH).items() + [(k, getattr(args, k)) + for k in ('question_url', 'assignments', 'reward') + if getattr(args, k) is not None])) + + sub = subs.add_parser('extend', + help = 'add assignments or time to a HIT') + sub.add_argument('HIT', + help = 'nickname or ID of the HIT to extend') + sub.add_argument('-a', '--assignments', dest = 'assignments', + metavar = 'N', type = int, + help = 'number of 
assignments to add') + sub.add_argument('-t', '--time', dest = 'time', + metavar = 'T', + help = 'amount of time to add to the expiration date') + sub.set_defaults(f = extend_hit, a = lambda: + [get_hitid(args.HIT), args.assignments, + args.time and parse_duration(args.time)]) + + sub = subs.add_parser('expire', + help = 'force a HIT to expire without deleting it') + sub.add_argument('HIT', + help = 'nickname or ID of the HIT to expire') + sub.set_defaults(f = expire_hit, a = lambda: + [get_hitid(args.HIT)]) + + sub = subs.add_parser('rm', + help = 'delete a HIT') + sub.add_argument('HIT', + help = 'nickname or ID of the HIT to delete') + sub.set_defaults(f = delete_hit, a = lambda: + [get_hitid(args.HIT)]) + + sub = subs.add_parser('as', + help = "list a HIT's submitted assignments") + sub.add_argument('HIT', + help = 'nickname or ID of the HIT to get assignments for') + sub.add_argument('-r', '--reviewable', dest = 'only_reviewable', + action = 'store_true', + help = 'show only unreviewed assignments') + sub.set_defaults(f = list_assignments, a = lambda: + [get_hitid(args.HIT), args.only_reviewable]) + + for command, fun, helpmsg in [ + ('approve', approve_assignments, 'approve assignments'), + ('reject', reject_assignments, 'reject assignments'), + ('unreject', unreject_assignments, 'approve previously rejected assignments')]: + sub = subs.add_parser(command, help = helpmsg) + sub.add_argument('ASSIGNMENT', nargs = '+', + help = 'ID of an assignment') + sub.add_argument('-m', '--message', dest = 'message', + metavar = 'TEXT', + help = 'feedback message shown to workers') + sub.set_defaults(f = fun, a = lambda: + [args.message, args.ASSIGNMENT]) + + sub = subs.add_parser('bonus', + help = 'give some workers a bonus') + sub.add_argument('AMOUNT', type = float, + help = 'bonus amount, in USD') + sub.add_argument('MESSAGE', + help = 'the reason for the bonus (shown to workers in an email sent by MTurk)') + sub.add_argument('WIDAID', nargs = '+', + help = 'a WORKER_ID,ASSIGNMENT_ID pair') + sub.set_defaults(f = grant_bonus, a = lambda: + [args.MESSAGE, args.AMOUNT, + [p.split(',') for p in args.WIDAID]]) + + sub = subs.add_parser('notify', + help = 'send a message to some workers') + sub.add_argument('SUBJECT', + help = 'subject of the message') + sub.add_argument('MESSAGE', + help = 'text of the message') + sub.add_argument('WORKER', nargs = '+', + help = 'ID of a worker') + sub.set_defaults(f = notify_workers, a = lambda: + [args.SUBJECT, args.MESSAGE, args.WORKER]) + + sub = subs.add_parser('give-qual', + help = 'give a qualification to some workers') + sub.add_argument('QUAL', + help = 'ID of the qualification') + sub.add_argument('WORKER', nargs = '+', + help = 'ID of a worker') + sub.add_argument('-v', '--value', dest = 'value', + metavar = 'N', type = int, default = 1, + help = 'value of the qualification') + sub.add_argument('--dontnotify', dest = 'notify', + action = 'store_false', default = True, + help = "don't notify workers") + sub.set_defaults(f = give_qualification, a = lambda: + [args.QUAL, args.WORKER, args.value, args.notify]) + + sub = subs.add_parser('revoke-qual', + help = 'revoke a qualification from some workers') + sub.add_argument('QUAL', + help = 'ID of the qualification') + sub.add_argument('WORKER', nargs = '+', + help = 'ID of a worker') + sub.add_argument('-m', '--message', dest = 'message', + metavar = 'TEXT', + help = 'the reason the qualification was revoked (shown to workers in an email sent by MTurk)') + sub.set_defaults(f = revoke_qualification, a = 
lambda: + [args.QUAL, args.WORKER, args.message]) + + args = parser.parse_args() + + init_by_args(args) + + f = args.f + a = args.a() + if isinstance(a, dict): + # We do some introspective gymnastics so we can produce a + # less incomprehensible error message if some arguments + # are missing. + spec = inspect.getargspec(f) + missing = set(spec.args[: len(spec.args) - len(spec.defaults)]) - set(a.keys()) + if missing: + raise ValueError('Missing arguments: ' + ', '.join(missing)) + doit = lambda: f(**a) + else: + doit = lambda: f(*a) + + try: + x = doit() + except boto.mturk.connection.MTurkRequestError as e: + print 'MTurk error:', e.error_message + sys.exit(1) + + if x is not None: + print x + + save_nicknames() diff -r 000000000000 -r 4f3585e2f14b env/bin/mulled-build --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/mulled-build Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from galaxy.tool_util.deps.mulled.mulled_build import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/mulled-build-channel --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/mulled-build-channel Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from galaxy.tool_util.deps.mulled.mulled_build_channel import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/mulled-build-files --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/mulled-build-files Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from galaxy.tool_util.deps.mulled.mulled_build_files import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/mulled-build-tool --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/mulled-build-tool Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from galaxy.tool_util.deps.mulled.mulled_build_tool import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/mulled-list --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/mulled-list Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from galaxy.tool_util.deps.mulled.mulled_list import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/mulled-search --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/mulled-search Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from 
galaxy.tool_util.deps.mulled.mulled_search import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/mulled-update-singularity-containers --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/mulled-update-singularity-containers Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from galaxy.tool_util.deps.mulled.mulled_update_singularity_containers import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/pip --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/pip Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/pip3 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/pip3 Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/pip3.9 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/pip3.9 Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/planemo --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/planemo Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from planemo.cli import planemo +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(planemo()) diff -r 000000000000 -r 4f3585e2f14b env/bin/prov-compare --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/prov-compare Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,127 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# encoding: utf-8 +""" +prov-compare -- Compare two PROV-JSON, PROV-XML, or RDF (PROV-O) files for equivalence + +@author: Trung Dong Huynh + +@copyright: 2016 University of Southampton, United Kingdom. All rights reserved. 
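+@note: exit status 0 means the two documents are equivalent and 1 means they
+differ; main() below returns the result of the comparison directly.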
+ +@license: MIT Licence + +@contact: trungdong@donggiang.com +@deffield updated: 2016-10-19 +""" + +from argparse import ArgumentParser, RawDescriptionHelpFormatter, FileType +import os +import sys +import logging +import traceback +import six + +from prov.model import ProvDocument + + +logger = logging.getLogger(__name__) + +__all__ = [] +__version__ = 0.1 +__date__ = '2015-06-16' +__updated__ = '2016-10-19' + +DEBUG = 0 +TESTRUN = 0 +PROFILE = 0 + + +@six.python_2_unicode_compatible +class CLIError(Exception): + """Generic exception to raise and log different fatal errors.""" + def __init__(self, msg): + super(CLIError).__init__(type(self)) + self.msg = "E: %s" % msg + + def __str__(self): + return self.msg + + +def main(argv=None): # IGNORE:C0111 + """Command line options.""" + + if argv is None: + argv = sys.argv + else: + sys.argv.extend(argv) + + program_name = os.path.basename(sys.argv[0]) + program_version = "v%s" % __version__ + program_build_date = str(__updated__) + program_version_message = '%%(prog)s %s (%s)' % (program_version, program_build_date) + program_shortdesc = __import__('__main__').__doc__.split("\n")[1] + program_license = '''%s + + Created by Trung Dong Huynh on %s. + Copyright 2016 University of Southampton. All rights reserved. + + Licensed under the MIT License + https://github.com/trungdong/prov/blob/master/LICENSE + + Distributed on an "AS IS" basis without warranties + or conditions of any kind, either express or implied. + +USAGE +''' % (program_shortdesc, str(__date__)) + + try: + # Setup argument parser + parser = ArgumentParser(description=program_license, formatter_class=RawDescriptionHelpFormatter) + parser.add_argument('file1', nargs='?', type=FileType('r')) + parser.add_argument('file2', nargs='?', type=FileType('r')) + parser.add_argument('-f', '--format1', dest='format1', action='store', default='json', + help='File 1\'s format: json or xml') + parser.add_argument('-F', '--format2', dest='format2', action='store', default='json', + help='File 2\'s format: json or xml') + parser.add_argument('-V', '--version', action='version', version=program_version_message) + + args = None + try: + # Process arguments + args = parser.parse_args() + doc1 = ProvDocument.deserialize(args.file1, format=args.format1.lower()) + doc2 = ProvDocument.deserialize(args.file2, format=args.format2.lower()) + return doc1 != doc2 + + finally: + if args: + if args.file1: + args.file1.close() + if args.file2: + args.file2.close() + + except Exception as e: + if DEBUG or TESTRUN: + traceback.print_exc() + raise e + indent = len(program_name) * " " + sys.stderr.write(program_name + ": " + str(e) + "\n") + sys.stderr.write(indent + " for help use --help") + return 2 + +if __name__ == "__main__": + logging.basicConfig(level=(logging.DEBUG if DEBUG else logging.INFO)) + if TESTRUN: + import doctest + doctest.testmod() + if PROFILE: + import cProfile + import pstats + profile_filename = 'prov_compare_profile.txt' + cProfile.run('main()', profile_filename) + statsfile = open("profile_stats.txt", "wb") + p = pstats.Stats(profile_filename, stream=statsfile) + stats = p.strip_dirs().sort_stats('cumulative') + stats.print_stats() + statsfile.close() + sys.exit(0) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/prov-convert --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/prov-convert Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,152 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# encoding: utf-8 +""" +convert -- Convert 
PROV-JSON to RDF, PROV-N, PROV-XML, or graphical formats (SVG, PDF, PNG) + +@author: Trung Dong Huynh + +@copyright: 2016 University of Southampton, United Kingdom. All rights reserved. + +@license: MIT License + +@contact: trungdong@donggiang.com +@deffield updated: 2016-10-19 +""" + +from argparse import ArgumentParser, RawDescriptionHelpFormatter, FileType +import os +import sys +import logging +import traceback +import six + +from prov.model import ProvDocument +from prov import serializers + + +logger = logging.getLogger(__name__) + +__all__ = [] +__version__ = 0.1 +__date__ = '2014-03-14' +__updated__ = '2016-10-19' + +DEBUG = 0 +TESTRUN = 0 +PROFILE = 0 + +GRAPHVIZ_SUPPORTED_FORMATS = { + 'bmp', 'canon', 'cmap', 'cmapx', 'cmapx_np', 'dot', 'eps', 'fig', 'gtk', 'gv', 'ico', 'imap', 'imap_np', 'ismap', + 'jpe', 'jpeg', 'jpg', 'pdf', 'plain', 'plain-ext', 'png', 'ps', 'ps2', 'svg', 'svgz', 'tif', 'tiff', 'tk', + 'vml', 'vmlz', 'x11', 'xdot', 'xlib' +} + + +@six.python_2_unicode_compatible +class CLIError(Exception): + """Generic exception to raise and log different fatal errors.""" + def __init__(self, msg): + super(CLIError).__init__(type(self)) + self.msg = "E: %s" % msg + + def __str__(self): + return self.msg + + +def convert_file(infile, outfile, output_format): + prov_doc = ProvDocument.deserialize(infile) + + # Formats not supported by prov.serializers + if output_format == 'provn': + outfile.write(prov_doc.get_provn().encode()) + elif output_format in GRAPHVIZ_SUPPORTED_FORMATS: + from prov.dot import prov_to_dot + dot = prov_to_dot(prov_doc) + content = dot.create(format=output_format) + outfile.write(content) + else: + # Try supported serializers: + try: + prov_doc.serialize(outfile, format=output_format) + except serializers.DoNotExist: + raise CLIError('Output format "%s" is not supported.' % output_format) + + +def main(argv=None): # IGNORE:C0111 + """Command line options.""" + + if argv is None: + argv = sys.argv + else: + sys.argv.extend(argv) + + program_name = os.path.basename(sys.argv[0]) + program_version = "v%s" % __version__ + program_build_date = str(__updated__) + program_version_message = '%%(prog)s %s (%s)' % (program_version, program_build_date) + program_shortdesc = __import__('__main__').__doc__.split("\n")[1] + program_license = '''%s + + Created by Trung Dong Huynh on %s. + Copyright 2016 University of Southampton. All rights reserved. + + Licensed under the MIT License + https://github.com/trungdong/prov/blob/master/LICENSE + + Distributed on an "AS IS" basis without warranties + or conditions of any kind, either express or implied. + +USAGE +''' % (program_shortdesc, str(__date__)) + + try: + # Setup argument parser + parser = ArgumentParser(description=program_license, formatter_class=RawDescriptionHelpFormatter) + parser.add_argument('-f', '--format', dest='format', action='store', default='json', + help='output format: json, xml, provn, or one supported by GraphViz (e.g. 
svg, pdf)') + parser.add_argument('infile', nargs='?', type=FileType('r'), default=sys.stdin) + parser.add_argument('outfile', nargs='?', type=FileType('wb'), default=sys.stdout) + parser.add_argument('-V', '--version', action='version', version=program_version_message) + + args = None + try: + # Process arguments + args = parser.parse_args() + convert_file(args.infile, args.outfile, args.format.lower()) + finally: + if args: + if args.infile: + args.infile.close() + if args.outfile: + args.outfile.close() + + return 0 + except KeyboardInterrupt: + # handle keyboard interrupt + return 0 + except Exception as e: + if DEBUG or TESTRUN: + traceback.print_exc() + raise e + indent = len(program_name) * " " + sys.stderr.write(program_name + ": " + str(e) + "\n") + sys.stderr.write(indent + " for help use --help") + return 2 + +if __name__ == "__main__": + logging.basicConfig(level=(logging.DEBUG if DEBUG else logging.INFO)) + if TESTRUN: + import doctest + doctest.testmod() + if PROFILE: + import cProfile + import pstats + profile_filename = 'converter_profile.txt' + cProfile.run('main()', profile_filename) + statsfile = open("profile_stats.txt", "wb") + p = pstats.Stats(profile_filename, stream=statsfile) + stats = p.strip_dirs().sort_stats('cumulative') + stats.print_stats() + statsfile.close() + sys.exit(0) + sys.exit(main()) \ No newline at end of file diff -r 000000000000 -r 4f3585e2f14b env/bin/pyami_sendmail --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/pyami_sendmail Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,52 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Copyright (c) 2010 Chris Moyer http://coredumped.org/ +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, dis- +# tribute, sublicense, and/or sell copies of the Software, and to permit +# persons to whom the Software is furnished to do so, subject to the fol- +# lowing conditions: +# +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- +# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + +# +# Send Mail from a PYAMI instance, or anything that has a boto.cfg +# properly set up +# +VERSION="0.1" +usage = """%prog [options] +Sends whatever is on stdin to the recipient specified by your boto.cfg +or whoever you specify in the options here.
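+
+Example (recipient address and subject are illustrative):
+    uptime | pyami_sendmail -t admin@example.com -s "Nightly report"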
+""" + +if __name__ == "__main__": + from boto.utils import notify + import sys + from optparse import OptionParser + parser = OptionParser(version=VERSION, usage=usage) + parser.add_option("-t", "--to", help="Optional to address to send to (default from your boto.cfg)", action="store", default=None, dest="to") + parser.add_option("-s", "--subject", help="Optional Subject to send this report as", action="store", default="Report", dest="subject") + parser.add_option("-f", "--file", help="Optionally, read from a file instead of STDIN", action="store", default=None, dest="file") + parser.add_option("--html", help="HTML Format the email", action="store_true", default=False, dest="html") + parser.add_option("--no-instance-id", help="If set, don't append the instance id", action="store_false", default=True, dest="append_instance_id") + + (options, args) = parser.parse_args() + if options.file: + body = open(options.file, 'r').read() + else: + body = sys.stdin.read() + + if options.html: + notify(options.subject, html_body=body, to_string=options.to, append_instance_id=options.append_instance_id) + else: + notify(options.subject, body=body, to_string=options.to, append_instance_id=options.append_instance_id) diff -r 000000000000 -r 4f3585e2f14b env/bin/python Binary file env/bin/python has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/python-argcomplete-check-easy-install-script --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/python-argcomplete-check-easy-install-script Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,55 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# Copyright 2012-2019, Andrey Kislyuk and argcomplete contributors. +# Licensed under the Apache License. See https://github.com/kislyuk/argcomplete for more info. + +''' +This script is part of the Python argcomplete package (https://github.com/kislyuk/argcomplete). +It is used to check if an EASY-INSTALL-SCRIPT wrapper redirects to a script that contains the string +"PYTHON_ARGCOMPLETE_OK". If you have enabled global completion in argcomplete, the completion hook will run it every +time you press in your shell. 
+ +Usage: + python-argcomplete-check-easy-install-script +''' + +import sys + +if len(sys.argv) != 2: + sys.exit(__doc__) + +sys.tracebacklimit = 0 + +with open(sys.argv[1]) as fh: + line1, head = fh.read(1024).split("\n", 1)[:2] + if line1.startswith('#') and ('py' in line1 or 'Py' in line1): + import re + lines = head.split("\n", 12) + for line in lines: + if line.startswith("# EASY-INSTALL-SCRIPT"): + import pkg_resources + dist, script = re.match("# EASY-INSTALL-SCRIPT: '(.+)','(.+)'", line).groups() + if "PYTHON_ARGCOMPLETE_OK" in pkg_resources.get_distribution(dist).get_metadata('scripts/' + script): + exit(0) + elif line.startswith("# EASY-INSTALL-ENTRY-SCRIPT"): + dist, group, name = re.match("# EASY-INSTALL-ENTRY-SCRIPT: '(.+)','(.+)','(.+)'", line).groups() + import pkg_resources, pkgutil + module_name = pkg_resources.get_distribution(dist).get_entry_info(group, name).module_name + with open(pkgutil.get_loader(module_name).get_filename()) as mod_fh: + if "PYTHON_ARGCOMPLETE_OK" in mod_fh.read(1024): + exit(0) + elif line.startswith("# EASY-INSTALL-DEV-SCRIPT"): + for line2 in lines: + if line2.startswith('__file__'): + filename = re.match("__file__ = '(.+)'", line2).group(1) + with open(filename) as mod_fh: + if "PYTHON_ARGCOMPLETE_OK" in mod_fh.read(1024): + exit(0) + elif line.startswith("# PBR Generated"): + module = re.search("from (.*) import", head).groups()[0] + import pkg_resources, pkgutil + with open(pkgutil.get_loader(module).get_filename()) as mod_fh: + if "PYTHON_ARGCOMPLETE_OK" in mod_fh.read(1024): + exit(0) + +exit(1) diff -r 000000000000 -r 4f3585e2f14b env/bin/python-argcomplete-tcsh --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/python-argcomplete-tcsh Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,23 @@ +#!/bin/sh +IFS= +export IFS + +COMP_WORDBREAKS= +export COMP_WORDBREAKS + +COMP_TYPE= +export COMP_TYPE + +COMP_LINE=${COMMAND_LINE} +export COMP_LINE + +COMP_POINT=${#COMMAND_LINE} +export COMP_POINT + +_ARGCOMPLETE=1 +export _ARGCOMPLETE + +_ARGCOMPLETE_SHELL=tcsh +export _ARGCOMPLETE_SHELL + +"$1" 8>&1 9>&2 1>/dev/null 2>/dev/null diff -r 000000000000 -r 4f3585e2f14b env/bin/python3 Binary file env/bin/python3 has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/python3.9 Binary file env/bin/python3.9 has changed diff -r 000000000000 -r 4f3585e2f14b env/bin/rdf2dot --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rdf2dot Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from rdflib.tools.rdf2dot import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/rdfgraphisomorphism --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rdfgraphisomorphism Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from rdflib.tools.graphisomorphism import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/rdfpipe --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rdfpipe Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from 
rdflib.tools.rdfpipe import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/rdfs2dot --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rdfs2dot Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from rdflib.tools.rdfs2dot import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/register-python-argcomplete --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/register-python-argcomplete Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,65 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# PYTHON_ARGCOMPLETE_OK + +# Copyright 2012-2019, Andrey Kislyuk and argcomplete contributors. +# Licensed under the Apache License. See https://github.com/kislyuk/argcomplete for more info. + +''' +Register a Python executable for use with the argcomplete module. + +To perform the registration, source the output of this script in your bash shell +(quote the output to avoid interpolation). + +Example: + + $ eval "$(register-python-argcomplete my-favorite-script.py)" + +For Tcsh + + $ eval `register-python-argcomplete --shell tcsh my-favorite-script.py` + +For Fish + + $ register-python-argcomplete --shell fish my-favourite-script.py > ~/.config/fish/my-favourite-script.py.fish +''' + +import sys +import argparse +import argcomplete + + +parser = argparse.ArgumentParser( + description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter) + +parser.add_argument( + '--no-defaults', + dest='use_defaults', action='store_false', default=True, + help='When no matches are generated, do not fallback to readline\'s default completion') +parser.add_argument( + '--complete-arguments', + nargs=argparse.REMAINDER, + help='arguments to call complete with; use of this option discards default options') +parser.add_argument( + '-s', '--shell', + choices=('bash', 'tcsh', 'fish'), default='bash', + help='output code for the specified shell') +parser.add_argument( + '-e', '--external-argcomplete-script', + help='external argcomplete script for auto completion of the executable') + +parser.add_argument( + 'executable', + nargs='+', + help='executable to completed (when invoked by exactly this name)') + +argcomplete.autocomplete(parser) + +if len(sys.argv) == 1: + parser.print_help() + sys.exit(1) + +args = parser.parse_args() + + +sys.stdout.write(argcomplete.shellcode( + args.executable, args.use_defaults, args.shell, args.complete_arguments, args.external_argcomplete_script)) diff -r 000000000000 -r 4f3585e2f14b env/bin/route53 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/route53 Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,205 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Author: Chris Moyer +# +# route53 is similar to sdbadmin for Route53, it's a simple +# console utility to perform the most frequent tasks with Route53 +# +# Example usage. Use route53 get after each command to see how the +# zone changes. +# +# Add a non-weighted record, change its value, then delete. 
Default TTL: +# +# route53 add_record ZPO9LGHZ43QB9 rr.example.com A 4.3.2.1 +# route53 change_record ZPO9LGHZ43QB9 rr.example.com A 9.8.7.6 +# route53 del_record ZPO9LGHZ43QB9 rr.example.com A 9.8.7.6 +# +# Add a weighted record with two different weights. Note that the TTL +# must be specified as route53 uses positional parameters rather than +# option flags: +# +# route53 add_record ZPO9LGHZ43QB9 wrr.example.com A 1.2.3.4 600 foo9 10 +# route53 add_record ZPO9LGHZ43QB9 wrr.example.com A 4.3.2.1 600 foo8 10 +# +# route53 change_record ZPO9LGHZ43QB9 wrr.example.com A 9.9.9.9 600 foo8 10 +# +# route53 del_record ZPO9LGHZ43QB9 wrr.example.com A 1.2.3.4 600 foo9 10 +# route53 del_record ZPO9LGHZ43QB9 wrr.example.com A 9.9.9.9 600 foo8 10 +# +# Add a non-weighted alias, change its value, then delete. Aliases inherit +# their TTLs from the backing ELB: +# +# route53 add_alias ZPO9LGHZ43QB9 alias.example.com A Z3DZXE0Q79N41H lb-1218761514.us-east-1.elb.amazonaws.com. +# route53 change_alias ZPO9LGHZ43QB9 alias.example.com. A Z3DZXE0Q79N41H lb2-1218761514.us-east-1.elb.amazonaws.com. +# route53 del_alias ZPO9LGHZ43QB9 alias.example.com. A Z3DZXE0Q79N41H lb2-1218761514.us-east-1.elb.amazonaws.com. + +def _print_zone_info(zoneinfo): + print "="*80 + print "| ID: %s" % zoneinfo['Id'].split("/")[-1] + print "| Name: %s" % zoneinfo['Name'] + print "| Ref: %s" % zoneinfo['CallerReference'] + print "="*80 + print zoneinfo['Config'] + print + + +def create(conn, hostname, caller_reference=None, comment=''): + """Create a hosted zone, returning the nameservers""" + response = conn.create_hosted_zone(hostname, caller_reference, comment) + print "Pending, please add the following Name Servers:" + for ns in response.NameServers: + print "\t", ns + +def delete_zone(conn, hosted_zone_id): + """Delete a hosted zone by ID""" + response = conn.delete_hosted_zone(hosted_zone_id) + print response + +def ls(conn): + """List all hosted zones""" + response = conn.get_all_hosted_zones() + for zoneinfo in response['ListHostedZonesResponse']['HostedZones']: + _print_zone_info(zoneinfo) + +def get(conn, hosted_zone_id, type=None, name=None, maxitems=None): + """Get all the records for a single zone""" + response = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=maxitems) + # If a maximum number of items was set, we limit to that number + # by turning the response into an actual list (copying it) + # instead of allowing it to page + if maxitems: + response = response[:] + print '%-40s %-5s %-20s %s' % ("Name", "Type", "TTL", "Value(s)") + for record in response: + print '%-40s %-5s %-20s %s' % (record.name, record.type, record.ttl, record.to_print()) + +def _add_del(conn, hosted_zone_id, change, name, type, identifier, weight, values, ttl, comment): + from boto.route53.record import ResourceRecordSets + changes = ResourceRecordSets(conn, hosted_zone_id, comment) + change = changes.add_change(change, name, type, ttl, + identifier=identifier, weight=weight) + for value in values.split(','): + change.add_value(value) + print changes.commit() + +def _add_del_alias(conn, hosted_zone_id, change, name, type, identifier, weight, alias_hosted_zone_id, alias_dns_name, comment): + from boto.route53.record import ResourceRecordSets + changes = ResourceRecordSets(conn, hosted_zone_id, comment) + change = changes.add_change(change, name, type, + identifier=identifier, weight=weight) + change.set_alias(alias_hosted_zone_id, alias_dns_name) + print changes.commit() + +def add_record(conn, hosted_zone_id, name, type, values,
ttl=600, + identifier=None, weight=None, comment=""): + """Add a new record to a zone. identifier and weight are optional.""" + _add_del(conn, hosted_zone_id, "CREATE", name, type, identifier, + weight, values, ttl, comment) + +def del_record(conn, hosted_zone_id, name, type, values, ttl=600, + identifier=None, weight=None, comment=""): + """Delete a record from a zone: name, type, ttl, identifier, and weight must match.""" + _add_del(conn, hosted_zone_id, "DELETE", name, type, identifier, + weight, values, ttl, comment) + +def add_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id, + alias_dns_name, identifier=None, weight=None, comment=""): + """Add a new alias to a zone. identifier and weight are optional.""" + _add_del_alias(conn, hosted_zone_id, "CREATE", name, type, identifier, + weight, alias_hosted_zone_id, alias_dns_name, comment) + +def del_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id, + alias_dns_name, identifier=None, weight=None, comment=""): + """Delete an alias from a zone: name, type, alias_hosted_zone_id, alias_dns_name, weight and identifier must match.""" + _add_del_alias(conn, hosted_zone_id, "DELETE", name, type, identifier, + weight, alias_hosted_zone_id, alias_dns_name, comment) + +def change_record(conn, hosted_zone_id, name, type, newvalues, ttl=600, + identifier=None, weight=None, comment=""): + """Delete and then add a record to a zone. identifier and weight are optional.""" + from boto.route53.record import ResourceRecordSets + changes = ResourceRecordSets(conn, hosted_zone_id, comment) + # Assume there are not more than 10 WRRs for a given (name, type) + responses = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=10) + for response in responses: + if response.name != name or response.type != type: + continue + if response.identifier != identifier or response.weight != weight: + continue + change1 = changes.add_change("DELETE", name, type, response.ttl, + identifier=response.identifier, + weight=response.weight) + for old_value in response.resource_records: + change1.add_value(old_value) + + change2 = changes.add_change("UPSERT", name, type, ttl, + identifier=identifier, weight=weight) + for new_value in newvalues.split(','): + change2.add_value(new_value) + print changes.commit() + +def change_alias(conn, hosted_zone_id, name, type, new_alias_hosted_zone_id, new_alias_dns_name, identifier=None, weight=None, comment=""): + """Delete and then add an alias to a zone. 
identifier and weight are optional.""" + from boto.route53.record import ResourceRecordSets + changes = ResourceRecordSets(conn, hosted_zone_id, comment) + # Assume there are not more than 10 WRRs for a given (name, type) + responses = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=10) + for response in responses: + if response.name != name or response.type != type: + continue + if response.identifier != identifier or response.weight != weight: + continue + change1 = changes.add_change("DELETE", name, type, + identifier=response.identifier, + weight=response.weight) + change1.set_alias(response.alias_hosted_zone_id, response.alias_dns_name) + change2 = changes.add_change("UPSERT", name, type, identifier=identifier, weight=weight) + change2.set_alias(new_alias_hosted_zone_id, new_alias_dns_name) + print changes.commit() + +def help(conn, fnc=None): + """Prints this help message""" + import inspect + self = sys.modules['__main__'] + if fnc: + try: + cmd = getattr(self, fnc) + except: + cmd = None + if not inspect.isfunction(cmd): + print "No function named: %s found" % fnc + sys.exit(2) + (args, varargs, varkw, defaults) = inspect.getargspec(cmd) + print cmd.__doc__ + print "Usage: %s %s" % (fnc, " ".join([ "[%s]" % a for a in args[1:]])) + else: + print "Usage: route53 [command]" + for cname in dir(self): + if not cname.startswith("_"): + cmd = getattr(self, cname) + if inspect.isfunction(cmd): + doc = cmd.__doc__ + print "\t%-20s %s" % (cname, doc) + sys.exit(1) + + +if __name__ == "__main__": + import boto + import sys + conn = boto.connect_route53() + self = sys.modules['__main__'] + if len(sys.argv) >= 2: + try: + cmd = getattr(self, sys.argv[1]) + except: + cmd = None + args = sys.argv[2:] + else: + cmd = help + args = [] + if not cmd: + cmd = help + try: + cmd(conn, *args) + except TypeError as e: + print e + help(conn, cmd.__name__) diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2html.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2html.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,23 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# $Id: rst2html.py 4564 2006-05-21 20:44:42Z wiemann $ +# Author: David Goodger +# Copyright: This module has been placed in the public domain. + +""" +A minimal front end to the Docutils Publisher, producing HTML. +""" + +try: + import locale + locale.setlocale(locale.LC_ALL, '') +except: + pass + +from docutils.core import publish_cmdline, default_description + + +description = ('Generates (X)HTML documents from standalone reStructuredText ' + 'sources. ' + default_description) + +publish_cmdline(writer_name='html', description=description) diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2html4.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2html4.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,26 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# $Id: rst2html4.py 7994 2016-12-10 17:41:45Z milde $ +# Author: David Goodger +# Copyright: This module has been placed in the public domain. + +""" +A minimal front end to the Docutils Publisher, producing (X)HTML. + +The output conforms to XHTML 1.0 transitional +and almost to HTML 4.01 transitional (except for closing empty tags). +""" + +try: + import locale + locale.setlocale(locale.LC_ALL, '') +except: + pass + +from docutils.core import publish_cmdline, default_description + + +description = ('Generates (X)HTML documents from standalone reStructuredText ' + 'sources. 
' + default_description) + +publish_cmdline(writer_name='html4', description=description) diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2html5.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2html5.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,35 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf8 -*- +# :Copyright: © 2015 Günter Milde. +# :License: Released under the terms of the `2-Clause BSD license`_, in short: +# +# Copying and distribution of this file, with or without modification, +# are permitted in any medium without royalty provided the copyright +# notice and this notice are preserved. +# This file is offered as-is, without any warranty. +# +# .. _2-Clause BSD license: http://www.spdx.org/licenses/BSD-2-Clause +# +# Revision: $Revision: 8410 $ +# Date: $Date: 2019-11-04 22:14:43 +0100 (Mo, 04. Nov 2019) $ + +""" +A minimal front end to the Docutils Publisher, producing HTML 5 documents. + +The output also conforms to XHTML 1.0 transitional +(except for the doctype declaration). +""" + +try: + import locale # module missing in Jython + locale.setlocale(locale.LC_ALL, '') +except locale.Error: + pass + +from docutils.core import publish_cmdline, default_description + +description = (u'Generates HTML 5 documents from standalone ' + u'reStructuredText sources ' + + default_description) + +publish_cmdline(writer_name='html5', description=description) diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2latex.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2latex.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,26 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# $Id: rst2latex.py 5905 2009-04-16 12:04:49Z milde $ +# Author: David Goodger +# Copyright: This module has been placed in the public domain. + +""" +A minimal front end to the Docutils Publisher, producing LaTeX. +""" + +try: + import locale + locale.setlocale(locale.LC_ALL, '') +except: + pass + +from docutils.core import publish_cmdline + +description = ('Generates LaTeX documents from standalone reStructuredText ' + 'sources. ' + 'Reads from <source> (default is stdin) and writes to ' + '<destination> (default is stdout). See ' + '<http://docutils.sourceforge.net/docs/user/latex.html> for ' + 'the full reference.') + +publish_cmdline(writer_name='latex', description=description) diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2man.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2man.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,26 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# Author: +# Contact: grubert@users.sf.net +# Copyright: This module has been placed in the public domain. + +""" +man.py +====== + +This module provides a simple command line interface that uses the +man page writer to output from ReStructuredText source. +""" + +import locale +try: + locale.setlocale(locale.LC_ALL, '') +except: + pass + +from docutils.core import publish_cmdline, default_description +from docutils.writers import manpage + +description = ("Generates plain unix manual documents.
" + default_description) + +publish_cmdline(writer=manpage.Writer(), description=description) diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2odt.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2odt.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,30 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# $Id: rst2odt.py 5839 2009-01-07 19:09:28Z dkuhlman $ +# Author: Dave Kuhlman +# Copyright: This module has been placed in the public domain. + +""" +A front end to the Docutils Publisher, producing OpenOffice documents. +""" + +import sys +try: + import locale + locale.setlocale(locale.LC_ALL, '') +except: + pass + +from docutils.core import publish_cmdline_to_binary, default_description +from docutils.writers.odf_odt import Writer, Reader + + +description = ('Generates OpenDocument/OpenOffice/ODF documents from ' + 'standalone reStructuredText sources. ' + default_description) + + +writer = Writer() +reader = Reader() +output = publish_cmdline_to_binary(reader=reader, writer=writer, + description=description) + diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2odt_prepstyles.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2odt_prepstyles.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,67 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# $Id: rst2odt_prepstyles.py 8346 2019-08-26 12:11:32Z milde $ +# Author: Dave Kuhlman +# Copyright: This module has been placed in the public domain. + +""" +Fix a word-processor-generated styles.odt for odtwriter use: Drop page size +specifications from styles.xml in STYLE_FILE.odt. +""" + +# Author: Michael Schutte + +from __future__ import print_function + +from lxml import etree +import sys +import zipfile +from tempfile import mkstemp +import shutil +import os + +NAMESPACES = { + "style": "urn:oasis:names:tc:opendocument:xmlns:style:1.0", + "fo": "urn:oasis:names:tc:opendocument:xmlns:xsl-fo-compatible:1.0" +} + + +def prepstyle(filename): + + zin = zipfile.ZipFile(filename) + styles = zin.read("styles.xml") + + root = etree.fromstring(styles) + for el in root.xpath("//style:page-layout-properties", + namespaces=NAMESPACES): + for attr in el.attrib: + if attr.startswith("{%s}" % NAMESPACES["fo"]): + del el.attrib[attr] + + tempname = mkstemp() + zout = zipfile.ZipFile(os.fdopen(tempname[0], "w"), "w", + zipfile.ZIP_DEFLATED) + + for item in zin.infolist(): + if item.filename == "styles.xml": + zout.writestr(item, etree.tostring(root)) + else: + zout.writestr(item, zin.read(item.filename)) + + zout.close() + zin.close() + shutil.move(tempname[1], filename) + + +def main(): + args = sys.argv[1:] + if len(args) != 1: + print(__doc__, file=sys.stderr) + print("Usage: %s STYLE_FILE.odt\n" % sys.argv[0], file=sys.stderr) + sys.exit(1) + filename = args[0] + prepstyle(filename) + + +if __name__ == '__main__': + main() diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2pseudoxml.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2pseudoxml.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,23 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# $Id: rst2pseudoxml.py 4564 2006-05-21 20:44:42Z wiemann $ +# Author: David Goodger +# Copyright: This module has been placed in the public domain. + +""" +A minimal front end to the Docutils Publisher, producing pseudo-XML. 
+""" + +try: + import locale + locale.setlocale(locale.LC_ALL, '') +except: + pass + +from docutils.core import publish_cmdline, default_description + + +description = ('Generates pseudo-XML from standalone reStructuredText ' + 'sources (for testing purposes). ' + default_description) + +publish_cmdline(description=description) diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2s5.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2s5.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,24 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# $Id: rst2s5.py 4564 2006-05-21 20:44:42Z wiemann $ +# Author: Chris Liechti +# Copyright: This module has been placed in the public domain. + +""" +A minimal front end to the Docutils Publisher, producing HTML slides using +the S5 template system. +""" + +try: + import locale + locale.setlocale(locale.LC_ALL, '') +except: + pass + +from docutils.core import publish_cmdline, default_description + + +description = ('Generates S5 (X)HTML slideshow documents from standalone ' + 'reStructuredText sources. ' + default_description) + +publish_cmdline(writer_name='s5', description=description) diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2xetex.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2xetex.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,27 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# $Id: rst2xetex.py 7847 2015-03-17 17:30:47Z milde $ +# Author: Guenter Milde +# Copyright: This module has been placed in the public domain. + +""" +A minimal front end to the Docutils Publisher, producing Lua/XeLaTeX code. +""" + +try: + import locale + locale.setlocale(locale.LC_ALL, '') +except: + pass + +from docutils.core import publish_cmdline + +description = ('Generates LaTeX documents from standalone reStructuredText ' + 'sources for compilation with the Unicode-aware TeX variants ' + 'XeLaTeX or LuaLaTeX. ' + 'Reads from (default is stdin) and writes to ' + ' (default is stdout). See ' + ' for ' + 'the full reference.') + +publish_cmdline(writer_name='xetex', description=description) diff -r 000000000000 -r 4f3585e2f14b env/bin/rst2xml.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rst2xml.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,23 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# $Id: rst2xml.py 4564 2006-05-21 20:44:42Z wiemann $ +# Author: David Goodger +# Copyright: This module has been placed in the public domain. + +""" +A minimal front end to the Docutils Publisher, producing Docutils XML. +""" + +try: + import locale + locale.setlocale(locale.LC_ALL, '') +except: + pass + +from docutils.core import publish_cmdline, default_description + + +description = ('Generates Docutils-native XML from standalone ' + 'reStructuredText sources. ' + default_description) + +publish_cmdline(writer_name='xml', description=description) diff -r 000000000000 -r 4f3585e2f14b env/bin/rstpep2html.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/rstpep2html.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,25 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 + +# $Id: rstpep2html.py 4564 2006-05-21 20:44:42Z wiemann $ +# Author: David Goodger +# Copyright: This module has been placed in the public domain. + +""" +A minimal front end to the Docutils Publisher, producing HTML from PEP +(Python Enhancement Proposal) documents. 
+""" + +try: + import locale + locale.setlocale(locale.LC_ALL, '') +except: + pass + +from docutils.core import publish_cmdline, default_description + + +description = ('Generates (X)HTML from reStructuredText-format PEP files. ' + + default_description) + +publish_cmdline(reader_name='pep', writer_name='pep_html', + description=description) diff -r 000000000000 -r 4f3585e2f14b env/bin/run-data-managers --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/run-data-managers Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from ephemeris.run_data_managers import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff -r 000000000000 -r 4f3585e2f14b env/bin/s3put --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/bin/s3put Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,438 @@ +#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3 +# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/ +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this software and associated documentation files (the +# "Software"), to deal in the Software without restriction, including +# without limitation the rights to use, copy, modify, merge, publish, dis- +# tribute, sublicense, and/or sell copies of the Software, and to permit +# persons to whom the Software is furnished to do so, subject to the fol- +# lowing conditions: +# +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL- +# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS +# IN THE SOFTWARE. +# +import getopt +import sys +import os +import boto + +from boto.compat import six + +try: + # multipart portions copyright Fabian Topfstedt + # https://gist.github.com/924094 + + import math + import mimetypes + from multiprocessing import Pool + from boto.s3.connection import S3Connection + from filechunkio import FileChunkIO + multipart_capable = True + usage_flag_multipart_capable = """ [--multipart]""" + usage_string_multipart_capable = """ + multipart - Upload files as multiple parts. This needs filechunkio. + Requires ListBucket, ListMultipartUploadParts, + ListBucketMultipartUploads and PutObject permissions.""" +except ImportError as err: + multipart_capable = False + usage_flag_multipart_capable = "" + if six.PY2: + attribute = 'message' + else: + attribute = 'msg' + usage_string_multipart_capable = '\n\n "' + \ + getattr(err, attribute)[len('No module named '):] + \ + '" is missing for multipart support ' + + +DEFAULT_REGION = 'us-east-1' + +usage_string = """ +SYNOPSIS + s3put [-a/--access_key ] [-s/--secret_key ] + -b/--bucket [-c/--callback ] + [-d/--debug ] [-i/--ignore ] + [-n/--no_op] [-p/--prefix ] [-k/--key_prefix ] + [-q/--quiet] [-g/--grant grant] [-w/--no_overwrite] [-r/--reduced] + [--header] [--region ] [--host ]""" + \ + usage_flag_multipart_capable + """ path [path...] 
+ + Where + access_key - Your AWS Access Key ID. If not supplied, boto will + use the value of the environment variable + AWS_ACCESS_KEY_ID + secret_key - Your AWS Secret Access Key. If not supplied, boto + will use the value of the environment variable + AWS_SECRET_ACCESS_KEY + bucket_name - The name of the S3 bucket the file(s) should be + copied to. + path - A path to a directory or file that represents the items + to be uploaded. If the path points to an individual file, + that file will be uploaded to the specified bucket. If the + path points to a directory, it will recursively traverse + the directory and upload all files to the specified bucket. + debug_level - 0 means no debug output (default), 1 means normal + debug output from boto, and 2 means boto debug output + plus request/response output from httplib + ignore_dirs - a comma-separated list of directory names that will + be ignored and not uploaded to S3. + num_cb - The number of progress callbacks to display. The default + is zero which means no callbacks. If you supplied a value + of "-c 10" for example, the progress callback would be + called 10 times for each file transferred. + prefix - A file path prefix that will be stripped from the full + path of the file when determining the key name in S3. + For example, if the full path of a file is: + /home/foo/bar/fie.baz + and the prefix is specified as "-p /home/foo/" the + resulting key name in S3 will be: + /bar/fie.baz + The prefix must end in a trailing separator and if it + does not then one will be added. + key_prefix - A prefix to be added to the S3 key name, after any + stripping of the file path is done based on the + "-p/--prefix" option. + reduced - Use Reduced Redundancy storage + grant - A canned ACL policy that will be granted on each file + transferred to S3. The value provided must be one + of the "canned" ACL policies supported by S3: + private|public-read|public-read-write|authenticated-read + no_overwrite - No files will be overwritten on S3, if the file/key + exists on s3 it will be kept. This is useful for + resuming interrupted transfers. Note this is not a + sync; even if the file has been updated locally, if + the key exists on s3 the file on s3 will not be + updated. + header - key=value pairs of extra header(s) to pass along in the + request + region - Manually set a region for buckets that are not in the US + classic region. Normally the region is autodetected, but + setting this yourself is more efficient. + host - Hostname override, for using an endpoint other than AWS S3 +""" + usage_string_multipart_capable + """ + + + If the -n option is provided, no files will be transferred to S3 but + informational messages will be printed about what would happen. +""" + + +def usage(status=1): + print(usage_string) + sys.exit(status) + + +def submit_cb(bytes_so_far, total_bytes): + print('%d bytes transferred / %d bytes total' % (bytes_so_far, total_bytes)) + + +def get_key_name(fullpath, prefix, key_prefix): + if fullpath.startswith(prefix): + key_name = fullpath[len(prefix):] + else: + key_name = fullpath + l = key_name.split(os.sep) + return key_prefix + '/'.join(l) + + +def _upload_part(bucketname, aws_key, aws_secret, multipart_id, part_num, + source_path, offset, bytes, debug, cb, num_cb, + amount_of_retries=10): + """ + Uploads a part with retries. + """ + if debug == 1: + print("_upload_part(%s, %s, %s)" % (source_path, offset, bytes)) + + def _upload(retries_left=amount_of_retries): + try: + if debug == 1: + print('Start uploading part #%d ...'
% part_num) + conn = S3Connection(aws_key, aws_secret) + conn.debug = debug + bucket = conn.get_bucket(bucketname) + for mp in bucket.get_all_multipart_uploads(): + if mp.id == multipart_id: + with FileChunkIO(source_path, 'r', offset=offset, + bytes=bytes) as fp: + mp.upload_part_from_file(fp=fp, part_num=part_num, + cb=cb, num_cb=num_cb) + break + except Exception as exc: + if retries_left: + _upload(retries_left=retries_left - 1) + else: + print('Failed uploading part #%d' % part_num) + raise exc + else: + if debug == 1: + print('... Uploaded part #%d' % part_num) + + _upload() + +def check_valid_region(conn, region): + if conn is None: + print('Invalid region (%s)' % region) + sys.exit(1) + +def multipart_upload(bucketname, aws_key, aws_secret, source_path, keyname, + reduced, debug, cb, num_cb, acl='private', headers={}, + guess_mimetype=True, parallel_processes=4, + region=DEFAULT_REGION): + """ + Parallel multipart upload. + """ + conn = boto.s3.connect_to_region(region, aws_access_key_id=aws_key, + aws_secret_access_key=aws_secret) + check_valid_region(conn, region) + conn.debug = debug + bucket = conn.get_bucket(bucketname) + + if guess_mimetype: + mtype = mimetypes.guess_type(keyname)[0] or 'application/octet-stream' + headers.update({'Content-Type': mtype}) + + mp = bucket.initiate_multipart_upload(keyname, headers=headers, + reduced_redundancy=reduced) + + source_size = os.stat(source_path).st_size + bytes_per_chunk = max(int(math.sqrt(5242880) * math.sqrt(source_size)), + 5242880) + chunk_amount = int(math.ceil(source_size / float(bytes_per_chunk))) + + pool = Pool(processes=parallel_processes) + for i in range(chunk_amount): + offset = i * bytes_per_chunk + remaining_bytes = source_size - offset + bytes = min([bytes_per_chunk, remaining_bytes]) + part_num = i + 1 + pool.apply_async(_upload_part, [bucketname, aws_key, aws_secret, mp.id, + part_num, source_path, offset, bytes, + debug, cb, num_cb]) + pool.close() + pool.join() + + if len(mp.get_all_parts()) == chunk_amount: + mp.complete_upload() + key = bucket.get_key(keyname) + key.set_acl(acl) + else: + mp.cancel_upload() + + +def singlepart_upload(bucket, key_name, fullpath, *kargs, **kwargs): + """ + Single upload. 
+ """ + k = bucket.new_key(key_name) + k.set_contents_from_filename(fullpath, *kargs, **kwargs) + + +def expand_path(path): + path = os.path.expanduser(path) + path = os.path.expandvars(path) + return os.path.abspath(path) + + +def main(): + + # default values + aws_access_key_id = None + aws_secret_access_key = None + bucket_name = '' + ignore_dirs = [] + debug = 0 + cb = None + num_cb = 0 + quiet = False + no_op = False + prefix = '/' + key_prefix = '' + grant = None + no_overwrite = False + reduced = False + headers = {} + host = None + multipart_requested = False + region = None + + try: + opts, args = getopt.getopt( + sys.argv[1:], 'a:b:c::d:g:hi:k:np:qs:wr', + ['access_key=', 'bucket=', 'callback=', 'debug=', 'help', 'grant=', + 'ignore=', 'key_prefix=', 'no_op', 'prefix=', 'quiet', + 'secret_key=', 'no_overwrite', 'reduced', 'header=', 'multipart', + 'host=', 'region=']) + except: + usage(1) + + # parse opts + for o, a in opts: + if o in ('-h', '--help'): + usage(0) + if o in ('-a', '--access_key'): + aws_access_key_id = a + if o in ('-b', '--bucket'): + bucket_name = a + if o in ('-c', '--callback'): + num_cb = int(a) + cb = submit_cb + if o in ('-d', '--debug'): + debug = int(a) + if o in ('-g', '--grant'): + grant = a + if o in ('-i', '--ignore'): + ignore_dirs = a.split(',') + if o in ('-n', '--no_op'): + no_op = True + if o in ('-w', '--no_overwrite'): + no_overwrite = True + if o in ('-p', '--prefix'): + prefix = a + if prefix[-1] != os.sep: + prefix = prefix + os.sep + prefix = expand_path(prefix) + if o in ('-k', '--key_prefix'): + key_prefix = a + if o in ('-q', '--quiet'): + quiet = True + if o in ('-s', '--secret_key'): + aws_secret_access_key = a + if o in ('-r', '--reduced'): + reduced = True + if o == '--header': + (k, v) = a.split("=", 1) + headers[k] = v + if o == '--host': + host = a + if o == '--multipart': + if multipart_capable: + multipart_requested = True + else: + print("multipart upload requested but not capable") + sys.exit(4) + if o == '--region': + regions = boto.s3.regions() + for region_info in regions: + if region_info.name == a: + region = a + break + else: + raise ValueError('Invalid region %s specified' % a) + + if len(args) < 1: + usage(2) + + if not bucket_name: + print("bucket name is required!") + usage(3) + + connect_args = { + 'aws_access_key_id': aws_access_key_id, + 'aws_secret_access_key': aws_secret_access_key + } + + if host: + connect_args['host'] = host + + c = boto.s3.connect_to_region(region or DEFAULT_REGION, **connect_args) + check_valid_region(c, region or DEFAULT_REGION) + c.debug = debug + b = c.get_bucket(bucket_name, validate=False) + + # Attempt to determine location and warn if no --host or --region + # arguments were passed. Then try to automagically figure out + # what should have been passed and fix it. + if host is None and region is None: + try: + location = b.get_location() + + # Classic region will be '', any other will have a name + if location: + print('Bucket exists in %s but no host or region given!' 
+
+                # Override for EU, which is really Ireland according to the docs
+                if location == 'EU':
+                    location = 'eu-west-1'
+
+                print('Automatically setting region to %s' % location)
+
+                # Here we create a new connection, and then take the existing
+                # bucket and set it to use the new connection
+                c = boto.s3.connect_to_region(location, **connect_args)
+                c.debug = debug
+                b.connection = c
+        except Exception as e:
+            if debug > 0:
+                print(e)
+            print('Could not get bucket region info, skipping...')
+
+    existing_keys_to_check_against = []
+    files_to_check_for_upload = []
+
+    for path in args:
+        path = expand_path(path)
+        # upload a directory of files recursively
+        if os.path.isdir(path):
+            if no_overwrite:
+                if not quiet:
+                    print('Getting list of existing keys to check against')
+                for key in b.list(get_key_name(path, prefix, key_prefix)):
+                    existing_keys_to_check_against.append(key.name)
+            for root, dirs, files in os.walk(path):
+                for ignore in ignore_dirs:
+                    if ignore in dirs:
+                        dirs.remove(ignore)
+                for path in files:
+                    if path.startswith("."):
+                        continue
+                    files_to_check_for_upload.append(os.path.join(root, path))
+
+        # upload a single file
+        elif os.path.isfile(path):
+            fullpath = os.path.abspath(path)
+            key_name = get_key_name(fullpath, prefix, key_prefix)
+            files_to_check_for_upload.append(fullpath)
+            existing_keys_to_check_against.append(key_name)
+
+        # we are trying to upload something unknown
+        else:
+            print("I don't know what %s is, so I can't upload it" % path)
+
+    for fullpath in files_to_check_for_upload:
+        key_name = get_key_name(fullpath, prefix, key_prefix)
+
+        if no_overwrite and key_name in existing_keys_to_check_against:
+            if b.get_key(key_name):
+                if not quiet:
+                    print('Skipping %s as it exists in s3' % fullpath)
+                continue
+
+        if not quiet:
+            print('Copying %s to %s/%s' % (fullpath, bucket_name, key_name))
+
+        if not no_op:
+            # 0-byte files don't work and also don't need multipart upload
+            if os.stat(fullpath).st_size != 0 and multipart_capable and \
+                    multipart_requested:
+                multipart_upload(bucket_name, aws_access_key_id,
+                                 aws_secret_access_key, fullpath, key_name,
+                                 reduced, debug, cb, num_cb,
+                                 grant or 'private', headers,
+                                 region=region or DEFAULT_REGION)
+            else:
+                singlepart_upload(b, key_name, fullpath, cb=cb, num_cb=num_cb,
+                                  policy=grant, reduced_redundancy=reduced,
+                                  headers=headers)
+
+
+if __name__ == "__main__":
+    main()
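For orientation, a typical invocation of the uploader above might look like the
following (an illustrative sketch only: the script name and every argument
value are placeholders, not values taken from this repository):

    python s3put.py -a <access_key> -s <secret_key> -b my-bucket -p /home/foo/ --multipart /home/foo/bar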
diff -r 000000000000 -r 4f3585e2f14b env/bin/schema-salad-doc
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/schema-salad-doc	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,8 @@
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from schema_salad.makedoc import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff -r 000000000000 -r 4f3585e2f14b env/bin/schema-salad-tool
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/schema-salad-tool	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,8 @@
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from schema_salad.main import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff -r 000000000000 -r 4f3585e2f14b env/bin/sdbadmin
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/sdbadmin	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,194 @@
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3
+# Copyright (c) 2009 Chris Moyer http://kopertop.blogspot.com/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+
+#
+# Tools to dump and recover an SDB domain
+#
+VERSION = "%prog version 1.0"
+import sys
+import time
+
+import boto
+from boto import sdb
+from boto.compat import json
+
+def choice_input(options, default=None, title=None):
+    """
+    Choice input
+    """
+    if title is None:
+        title = "Please choose"
+    print(title)
+    objects = []
+    for n, obj in enumerate(options):
+        print("%s: %s" % (n, obj))
+        objects.append(obj)
+    choice = int(input(">>> "))
+    try:
+        choice = objects[choice]
+    except IndexError:
+        choice = default
+    return choice
+
+def confirm(message="Are you sure?"):
+    choice = input("%s [yN] " % message)
+    return choice and len(choice) > 0 and choice[0].lower() == "y"
+
+
+def dump_db(domain, file_name, use_json=False, sort_attributes=False):
+    """
+    Dump SDB domain to file
+    """
+    with open(file_name, "w") as f:
+        if use_json:
+            for item in domain:
+                data = {"name": item.name, "attributes": item}
+                f.write(json.dumps(data, sort_keys=sort_attributes) + "\n")
+        else:
+            domain.to_xml(f)
+
+def empty_db(domain):
+    """
+    Remove all entries from domain
+    """
+    for item in domain:
+        item.delete()
+
+def load_db(domain, file, use_json=False):
+    """
+    Load a domain from a file. This doesn't overwrite any existing
+    data in the domain, so if you want to do a full recovery and restore
+    you need to call empty_db before calling this.
+
+    :param domain: The SDB Domain object to load to
+    :param file: The File to load the DB from
+    """
+    if use_json:
+        for line in file.readlines():
+            if line:
+                data = json.loads(line)
+                item = domain.new_item(data['name'])
+                item.update(data['attributes'])
+                item.save()
+    else:
+        domain.from_xml(file)
+
+def check_valid_region(conn, region):
+    if conn is None:
+        print('Invalid region (%s)' % region)
+        sys.exit(1)
+
+def create_db(domain_name, region_name):
+    """Create a new DB
+
+    :param domain_name: Name of the domain to create
+    :type domain_name: str
+    """
+    sdb = boto.sdb.connect_to_region(region_name)
+    check_valid_region(sdb, region_name)
+    return sdb.create_domain(domain_name)
+
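+# Note on the JSON dump format used by dump_db/load_db above (editorial
+# comment; the item names and attribute values are assumed purely for
+# illustration): with --use-json, each line of the dump file is one JSON
+# object, e.g.
+#   {"name": "item1", "attributes": {"colour": "red", "size": "42"}}
+#   {"name": "item2", "attributes": {"colour": "blue"}}
+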
+if __name__ == "__main__":
+    from optparse import OptionParser
+    parser = OptionParser(version=VERSION, usage="Usage: %prog [--dump|--load|--empty|--list|-l] [options]")
+
+    # Commands
+    parser.add_option("--dump", help="Dump domain to file", dest="dump", default=False, action="store_true")
+    parser.add_option("--load", help="Load domain contents from file", dest="load", default=False, action="store_true")
+    parser.add_option("--empty", help="Empty all contents of domain", dest="empty", default=False, action="store_true")
+    parser.add_option("-l", "--list", help="List all domains", dest="list", default=False, action="store_true")
+    parser.add_option("-c", "--create", help="Create domain", dest="create", default=False, action="store_true")
+
+    parser.add_option("-a", "--all-domains", help="Operate on all domains", action="store_true", default=False, dest="all_domains")
+    if json:
+        parser.add_option("-j", "--use-json", help="Load/Store as JSON instead of XML", action="store_true", default=False, dest="json")
+    parser.add_option("-s", "--sort-attributes", help="Sort the element attributes", action="store_true", default=False, dest="sort_attributes")
+    parser.add_option("-d", "--domain", help="Do functions on domain (may be more than one)", action="append", dest="domains")
+    parser.add_option("-f", "--file", help="Input/Output file we're operating on", dest="file_name")
+    parser.add_option("-r", "--region", help="Region (e.g. us-east-1[default] or eu-west-1)", default="us-east-1", dest="region_name")
+    (options, args) = parser.parse_args()
+
+    if options.create:
+        for domain_name in options.domains:
+            create_db(domain_name, options.region_name)
+        exit()
+
+    sdb = boto.sdb.connect_to_region(options.region_name)
+    check_valid_region(sdb, options.region_name)
+    if options.list:
+        for db in sdb.get_all_domains():
+            print(db)
+        exit()
+
+    if not options.dump and not options.load and not options.empty:
+        parser.print_help()
+        exit()
+
+    #
+    # Setup
+    #
+    if options.domains:
+        domains = []
+        for domain_name in options.domains:
+            domains.append(sdb.get_domain(domain_name))
+    elif options.all_domains:
+        domains = sdb.get_all_domains()
+    else:
+        domains = [choice_input(options=sdb.get_all_domains(), title="No domain specified, please choose one")]
+
+    #
+    # Execute the commands
+    #
+    stime = time.time()
+    if options.empty:
+        if confirm("WARNING!!! Are you sure you want to empty the following domains?: %s" % domains):
+            stime = time.time()
+            for domain in domains:
+                print("--------> Emptying %s <--------" % domain.name)
+                empty_db(domain)
+        else:
+            print("Canceling operations")
+            exit()
+
+    if options.dump:
+        for domain in domains:
+            print("--------> Dumping %s <---------" % domain.name)
+            if options.file_name:
+                file_name = options.file_name
+            else:
+                file_name = "%s.db" % domain.name
+            dump_db(domain, file_name, options.json, options.sort_attributes)
+
+    if options.load:
+        for domain in domains:
+            print("---------> Loading %s <----------" % domain.name)
+            if options.file_name:
+                file_name = options.file_name
+            else:
+                file_name = "%s.db" % domain.name
+            load_db(domain, open(file_name, "rb"), options.json)
+
+    total_time = round(time.time() - stime, 2)
+    print("--------> Finished in %s <--------" % total_time)
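A typical round trip with the utility above might look like this (an
illustrative sketch only; the domain and file names are placeholders):

    sdbadmin --dump -d mydomain -j -f mydomain.json
    sdbadmin --empty -d mydomain
    sdbadmin --load -d mydomain -j -f mydomain.json

Because --load does not clear existing items first, a full restore is dump,
then empty, then load, as the load_db docstring notes.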
diff -r 000000000000 -r 4f3585e2f14b env/bin/setup-data-libraries
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/setup-data-libraries	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,8 @@
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from ephemeris.setup_data_libraries import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff -r 000000000000 -r 4f3585e2f14b env/bin/shed-tools
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/shed-tools	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,8 @@
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from ephemeris.shed_tools import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff -r 000000000000 -r 4f3585e2f14b env/bin/tabulate
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/tabulate	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,8 @@
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from tabulate import _main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(_main())
diff -r 000000000000 -r 4f3585e2f14b env/bin/taskadmin
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/taskadmin	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,116 @@
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3
+# Copyright (c) 2009 Chris Moyer http://coredumped.org/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+
+#
+# Task/Job Administration utility
+#
+VERSION = "0.1"
+__version__ = VERSION
+usage = """%prog [options] [command]
+Commands:
+    list|ls                            List all Tasks in SDB
+    delete <task_id>                   Delete the Task with the given id
+    get <name>                         Get Tasks whose name starts with <name>
+    create|mk <name> <hour> <command>  Create a new Task <name> running <command> every <hour>
+"""
+
+def list():
+    """List all Tasks in SDB"""
+    from boto.manage.task import Task
+    print("%-8s %-40s %s" % ("Hour", "Name", "Command"))
+    print("-" * 100)
+    for t in Task.all():
+        print("%-8s %-40s %s" % (t.hour, t.name, t.command))
+
+def get(name):
+    """Get a task
+
+    :param name: The name of the task to fetch
+    :type name: str
+    """
+    from boto.manage.task import Task
+    q = Task.find()
+    q.filter("name like", "%s%%" % name)
+    for t in q:
+        print("=" * 80)
+        print("|  ", t.id)
+        print("|%s" % ("-" * 79))
+        print("| Name:         ", t.name)
+        print("| Hour:         ", t.hour)
+        print("| Command:      ", t.command)
+        if t.last_executed:
+            print("| Last Run:     ", t.last_executed.ctime())
+            print("| Last Status:  ", t.last_status)
+            print("| Last Run Log: ", t.last_output)
+        print("=" * 80)
+
+def delete(id):
+    from boto.manage.task import Task
+    t = Task.get_by_id(id)
+    print("Deleting task: %s" % t.name)
+    if input("Are you sure? ").lower() in ["y", "yes"]:
+        t.delete()
+        print("Deleted")
+    else:
+        print("Canceled")
+
+def create(name, hour, command):
+    """Create a new task
+
+    :param name: Name of the task to create
+    :type name: str
+    :param hour: What hour to run it at, "*" for every hour
+    :type hour: str
+    :param command: The command to execute
+    :type command: str
+    """
+    from boto.manage.task import Task
+    t = Task()
+    t.name = name
+    t.hour = hour
+    t.command = command
+    t.put()
+    print("Created task: %s" % t.id)
+
+if __name__ == "__main__":
+    try:
+        import readline
+    except ImportError:
+        pass
+    import boto
+    import sys
+    from optparse import OptionParser
+    from boto.mashups.iobject import IObject
+    parser = OptionParser(version=__version__, usage=usage)
+
+    (options, args) = parser.parse_args()
+
+    if len(args) < 1:
+        parser.print_help()
+        sys.exit(1)
+
+    command = args[0].lower()
+    if command in ("ls", "list"):
+        list()
+    elif command == "get":
+        get(args[1])
+    elif command == "create":
+        create(args[1], args[2], args[3])
+    elif command == "delete":
+        delete(args[1])
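An example session with the task utility above (an illustrative sketch only;
the task name and command are placeholders). Per the create() docstring, an
hour argument of "*" means the command runs every hour:

    taskadmin create nightly-backup "*" "/usr/local/bin/backup.sh"
    taskadmin get nightly-backup
    taskadmin ls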
diff -r 000000000000 -r 4f3585e2f14b env/bin/virtualenv
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/virtualenv	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,8 @@
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from virtualenv.__main__ import run_with_catch
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(run_with_catch())
diff -r 000000000000 -r 4f3585e2f14b env/bin/workflow-install
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/workflow-install	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,8 @@
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from ephemeris.workflow_install import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff -r 000000000000 -r 4f3585e2f14b env/bin/workflow-to-tools
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/bin/workflow-to-tools	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,8 @@
+#!/Users/cmdms/OneDrive-UOB/Development/Projects/2021/sam-consensus-v3/env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from ephemeris.generate_tool_list_from_ga_workflow_files import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/PKG-INFO
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/PKG-INFO	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,65 @@
+Metadata-Version: 2.1
+Name: CacheControl
+Version: 0.11.7
+Summary: httplib2 caching for requests
+Home-page: https://github.com/ionrock/cachecontrol
+Author: Eric Larson
+Author-email: eric@ionrock.org
+License: UNKNOWN
+Description: ==============
+         CacheControl
+        ==============
+
+        .. image:: https://img.shields.io/pypi/v/cachecontrol.svg
+            :target: https://pypi.python.org/pypi/cachecontrol
+            :alt: Latest Version
+
+        .. image:: https://travis-ci.org/ionrock/cachecontrol.png?branch=master
+            :target: https://travis-ci.org/ionrock/cachecontrol
+
+        CacheControl is a port of the caching algorithms in httplib2_ for use with
+        the requests_ session object.
+
+        It was written because httplib2's better support for caching is often
+        mitigated by its lack of thread safety. The same is true of requests in
+        terms of caching.
+
+
+        Quickstart
+        ==========
+
+        .. code-block:: python
+
+            import requests
+
+            from cachecontrol import CacheControl
+
+
+            sess = requests.session()
+            cached_sess = CacheControl(sess)
+
+            response = cached_sess.get('http://google.com')
+
+        If the URL contains any caching-based headers, it will cache the
+        result in a simple dictionary.
+
+        For more info, check out the docs_
+
+        .. _docs: http://cachecontrol.readthedocs.org/en/latest/
+        .. _httplib2: https://github.com/jcgregorio/httplib2
+        .. _requests: http://docs.python-requests.org/
+
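A common next step beyond the quickstart above is to persist responses on disk
with the optional file cache, which is what the "filecache" extra listed below
provides. An illustrative sketch only, not part of the original PKG-INFO; the
cache directory name is assumed:

    import requests
    from cachecontrol import CacheControl
    from cachecontrol.caches import FileCache

    # responses are stored under .webcache/ instead of an in-memory dict
    sess = CacheControl(requests.session(), cache=FileCache('.webcache'))
    response = sess.get('http://google.com')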
+Keywords: requests http caching web
+Platform: UNKNOWN
+Classifier: Development Status :: 4 - Beta
+Classifier: Environment :: Web Environment
+Classifier: License :: OSI Approved :: Apache Software License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python :: 2.6
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.2
+Classifier: Programming Language :: Python :: 3.3
+Classifier: Programming Language :: Python :: 3.4
+Classifier: Topic :: Internet :: WWW/HTTP
+Provides-Extra: filecache
diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/SOURCES.txt
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/SOURCES.txt	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,24 @@
+LICENSE.txt
+MANIFEST.in
+README.rst
+setup.cfg
+setup.py
+CacheControl.egg-info/PKG-INFO
+CacheControl.egg-info/SOURCES.txt
+CacheControl.egg-info/dependency_links.txt
+CacheControl.egg-info/entry_points.txt
+CacheControl.egg-info/requires.txt
+CacheControl.egg-info/top_level.txt
+cachecontrol/__init__.py
+cachecontrol/_cmd.py
+cachecontrol/adapter.py
+cachecontrol/cache.py
+cachecontrol/compat.py
+cachecontrol/controller.py
+cachecontrol/filewrapper.py
+cachecontrol/heuristics.py
+cachecontrol/serialize.py
+cachecontrol/wrapper.py
+cachecontrol/caches/__init__.py
+cachecontrol/caches/file_cache.py
+cachecontrol/caches/redis_cache.py
\ No newline at end of file
diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/dependency_links.txt
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/dependency_links.txt	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,1 @@
+
diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/entry_points.txt
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/entry_points.txt	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,3 @@
+[console_scripts]
+doesitcache = cachecontrol._cmd:main
+
diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/installed-files.txt
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/installed-files.txt	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,33 @@
+../../../../bin/doesitcache
+../cachecontrol/__init__.py
+../cachecontrol/__pycache__/__init__.cpython-39.pyc
+../cachecontrol/__pycache__/_cmd.cpython-39.pyc
+../cachecontrol/__pycache__/adapter.cpython-39.pyc
+../cachecontrol/__pycache__/cache.cpython-39.pyc
+../cachecontrol/__pycache__/compat.cpython-39.pyc
+../cachecontrol/__pycache__/controller.cpython-39.pyc
+../cachecontrol/__pycache__/filewrapper.cpython-39.pyc
+../cachecontrol/__pycache__/heuristics.cpython-39.pyc
+../cachecontrol/__pycache__/serialize.cpython-39.pyc
+../cachecontrol/__pycache__/wrapper.cpython-39.pyc
+../cachecontrol/_cmd.py
+../cachecontrol/adapter.py
+../cachecontrol/cache.py
+../cachecontrol/caches/__init__.py
+../cachecontrol/caches/__pycache__/__init__.cpython-39.pyc
+../cachecontrol/caches/__pycache__/file_cache.cpython-39.pyc
+../cachecontrol/caches/__pycache__/redis_cache.cpython-39.pyc
+../cachecontrol/caches/file_cache.py
+../cachecontrol/caches/redis_cache.py +../cachecontrol/compat.py +../cachecontrol/controller.py +../cachecontrol/filewrapper.py +../cachecontrol/heuristics.py +../cachecontrol/serialize.py +../cachecontrol/wrapper.py +PKG-INFO +SOURCES.txt +dependency_links.txt +entry_points.txt +requires.txt +top_level.txt diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/requires.txt --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/requires.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,4 @@ +requests + +[filecache] +lockfile>=0.9 diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/top_level.txt --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/CacheControl-0.11.7-py3.9.egg-info/top_level.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +cachecontrol diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/INSTALLER --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/INSTALLER Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +pip diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/LICENSE.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/LICENSE.rst Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,28 @@ +Copyright 2007 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/METADATA --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/METADATA Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,106 @@ +Metadata-Version: 2.1 +Name: Jinja2 +Version: 2.11.3 +Summary: A very fast and expressive template engine. 
+Home-page: https://palletsprojects.com/p/jinja/ +Author: Armin Ronacher +Author-email: armin.ronacher@active-4.com +Maintainer: Pallets +Maintainer-email: contact@palletsprojects.com +License: BSD-3-Clause +Project-URL: Documentation, https://jinja.palletsprojects.com/ +Project-URL: Code, https://github.com/pallets/jinja +Project-URL: Issue tracker, https://github.com/pallets/jinja/issues +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Web Environment +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Text Processing :: Markup :: HTML +Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.* +Description-Content-Type: text/x-rst +Requires-Dist: MarkupSafe (>=0.23) +Provides-Extra: i18n +Requires-Dist: Babel (>=0.8) ; extra == 'i18n' + +Jinja +===== + +Jinja is a fast, expressive, extensible templating engine. Special +placeholders in the template allow writing code similar to Python +syntax. Then the template is passed data to render the final document. + +It includes: + +- Template inheritance and inclusion. +- Define and import macros within templates. +- HTML templates can use autoescaping to prevent XSS from untrusted + user input. +- A sandboxed environment can safely render untrusted templates. +- AsyncIO support for generating templates and calling async + functions. +- I18N support with Babel. +- Templates are compiled to optimized Python code just-in-time and + cached, or can be compiled ahead-of-time. +- Exceptions point to the correct line in templates to make debugging + easier. +- Extensible filters, tests, functions, and even syntax. + +Jinja's philosophy is that while application logic belongs in Python if +possible, it shouldn't make the template designer's job difficult by +restricting functionality too much. + + +Installing +---------- + +Install and update using `pip`_: + +.. code-block:: text + + $ pip install -U Jinja2 + +.. _pip: https://pip.pypa.io/en/stable/quickstart/ + + +In A Nutshell +------------- + +.. 
code-block:: jinja

+    {% extends "base.html" %}
+    {% block title %}Members{% endblock %}
+    {% block content %}
+      <ul>
+      {% for user in users %}
+        <li><a href="{{ user.url }}">{{ user.username }}</a></li>
+      {% endfor %}
+      </ul>
+    {% endblock %}
+
+
+Links
+-----
+
+- Website: https://palletsprojects.com/p/jinja/
+- Documentation: https://jinja.palletsprojects.com/
+- Releases: https://pypi.org/project/Jinja2/
+- Code: https://github.com/pallets/jinja
+- Issue tracker: https://github.com/pallets/jinja/issues
+- Test status: https://dev.azure.com/pallets/jinja/_build
+- Official chat: https://discord.gg/t6rrQZH
+
+
diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/RECORD
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/RECORD	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,61 @@
+Jinja2-2.11.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+Jinja2-2.11.3.dist-info/LICENSE.rst,sha256=O0nc7kEF6ze6wQ-vG-JgQI_oXSUrjp3y4JefweCUQ3s,1475
+Jinja2-2.11.3.dist-info/METADATA,sha256=PscpJ1C3RSp8xcjV3fAuTz13rKbGxmzJXnMQFH-WKhs,3535
+Jinja2-2.11.3.dist-info/RECORD,,
+Jinja2-2.11.3.dist-info/WHEEL,sha256=Z-nyYpwrcSqxfdux5Mbn_DQ525iP7J2DG3JgGvOYyTQ,110
+Jinja2-2.11.3.dist-info/entry_points.txt,sha256=Qy_DkVo6Xj_zzOtmErrATe8lHZhOqdjpt3e4JJAGyi8,61
+Jinja2-2.11.3.dist-info/top_level.txt,sha256=PkeVWtLb3-CqjWi1fO29OCbj55EhX_chhKrCdrVe_zs,7
+jinja2/__init__.py,sha256=LZUXmxJc2GIchfSAeMWsxCWiQYO-w1-736f2Q3I8ms8,1549
+jinja2/__pycache__/__init__.cpython-39.pyc,,
+jinja2/__pycache__/_compat.cpython-39.pyc,,
+jinja2/__pycache__/_identifier.cpython-39.pyc,,
+jinja2/__pycache__/asyncfilters.cpython-39.pyc,,
+jinja2/__pycache__/asyncsupport.cpython-39.pyc,,
+jinja2/__pycache__/bccache.cpython-39.pyc,,
+jinja2/__pycache__/compiler.cpython-39.pyc,,
+jinja2/__pycache__/constants.cpython-39.pyc,,
+jinja2/__pycache__/debug.cpython-39.pyc,,
+jinja2/__pycache__/defaults.cpython-39.pyc,,
+jinja2/__pycache__/environment.cpython-39.pyc,,
+jinja2/__pycache__/exceptions.cpython-39.pyc,,
+jinja2/__pycache__/ext.cpython-39.pyc,,
+jinja2/__pycache__/filters.cpython-39.pyc,,
+jinja2/__pycache__/idtracking.cpython-39.pyc,,
+jinja2/__pycache__/lexer.cpython-39.pyc,,
+jinja2/__pycache__/loaders.cpython-39.pyc,,
+jinja2/__pycache__/meta.cpython-39.pyc,,
+jinja2/__pycache__/nativetypes.cpython-39.pyc,,
+jinja2/__pycache__/nodes.cpython-39.pyc,,
+jinja2/__pycache__/optimizer.cpython-39.pyc,,
+jinja2/__pycache__/parser.cpython-39.pyc,,
+jinja2/__pycache__/runtime.cpython-39.pyc,,
+jinja2/__pycache__/sandbox.cpython-39.pyc,,
+jinja2/__pycache__/tests.cpython-39.pyc,,
+jinja2/__pycache__/utils.cpython-39.pyc,,
+jinja2/__pycache__/visitor.cpython-39.pyc,,
+jinja2/_compat.py,sha256=B6Se8HjnXVpzz9-vfHejn-DV2NjaVK-Iewupc5kKlu8,3191
+jinja2/_identifier.py,sha256=EdgGJKi7O1yvr4yFlvqPNEqV6M1qHyQr8Gt8GmVTKVM,1775
+jinja2/asyncfilters.py,sha256=XJtYXTxFvcJ5xwk6SaDL4S0oNnT0wPYvXBCSzc482fI,4250
+jinja2/asyncsupport.py,sha256=ZBFsDLuq3Gtji3Ia87lcyuDbqaHZJRdtShZcqwpFnSQ,7209
+jinja2/bccache.py,sha256=3Pmp4jo65M9FQuIxdxoDBbEDFwe4acDMQf77nEJfrHA,12139
+jinja2/compiler.py,sha256=Ta9W1Lit542wItAHXlDcg0sEOsFDMirCdlFPHAurg4o,66284
+jinja2/constants.py,sha256=RR1sTzNzUmKco6aZicw4JpQpJGCuPuqm1h1YmCNUEFY,1458
+jinja2/debug.py,sha256=neR7GIGGjZH3_ILJGVUYy3eLQCCaWJMXOb7o0kGInWc,8529
+jinja2/defaults.py,sha256=85B6YUUCyWPSdrSeVhcqFVuu_bHUAQXeey--FIwSeVQ,1126
+jinja2/environment.py,sha256=XDSLKc4SqNLMOwTSq3TbWEyA5WyXfuLuVD0wAVjEFwM,50629
+jinja2/exceptions.py,sha256=VjNLawcmf2ODffqVMCQK1cRmvFaUfQWF4u8ouP3QPcE,5425
+jinja2/ext.py,sha256=AtwL5O5enT_L3HR9-oBvhGyUTdGoyaqG_ICtnR_EVd4,26441 +jinja2/filters.py,sha256=9ORilsZrUoydSI9upz8_qGy7gozDWLYoFmlIBFSVRnQ,41439 +jinja2/idtracking.py,sha256=J3O4VHsrbf3wzwiBc7Cro26kHb6_5kbULeIOzocchIU,9211 +jinja2/lexer.py,sha256=nUFLRKhhKmmEWkLI65nQePgcQs7qsRdjVYZETMt_v0g,30331 +jinja2/loaders.py,sha256=C-fST_dmFjgWkp0ZuCkrgICAoOsoSIF28wfAFink0oU,17666 +jinja2/meta.py,sha256=QjyYhfNRD3QCXjBJpiPl9KgkEkGXJbAkCUq4-Ur10EQ,4131 +jinja2/nativetypes.py,sha256=Ul__gtVw4xH-0qvUvnCNHedQeNDwmEuyLJztzzSPeRg,2753 +jinja2/nodes.py,sha256=Mk1oJPVgIjnQw9WOqILvcu3rLepcFZ0ahxQm2mbwDwc,31095 +jinja2/optimizer.py,sha256=gQLlMYzvQhluhzmAIFA1tXS0cwgWYOjprN-gTRcHVsc,1457 +jinja2/parser.py,sha256=fcfdqePNTNyvosIvczbytVA332qpsURvYnCGcjDHSkA,35660 +jinja2/runtime.py,sha256=0y-BRyIEZ9ltByL2Id6GpHe1oDRQAwNeQvI0SKobNMw,30618 +jinja2/sandbox.py,sha256=knayyUvXsZ-F0mk15mO2-ehK9gsw04UhB8td-iUOtLc,17127 +jinja2/tests.py,sha256=iO_Y-9Vo60zrVe1lMpSl5sKHqAxe2leZHC08OoZ8K24,4799 +jinja2/utils.py,sha256=Wy4yC3IByqUWwnKln6SdaixdzgK74P6F5nf-gQZrYnU,22436 +jinja2/visitor.py,sha256=DUHupl0a4PGp7nxRtZFttUzAi1ccxzqc2hzetPYUz8U,3240 diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/WHEEL --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/WHEEL Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.36.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/entry_points.txt --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/entry_points.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,3 @@ +[babel.extractors] +jinja2 = jinja2.ext:babel_extract [i18n] + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/top_level.txt --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Jinja2-2.11.3.dist-info/top_level.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +jinja2 diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/INSTALLER --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/INSTALLER Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +pip diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/LICENSE.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/LICENSE.rst Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,28 @@ +Copyright 2010 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
+TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/METADATA
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/METADATA	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,94 @@
+Metadata-Version: 2.1
+Name: MarkupSafe
+Version: 1.1.1
+Summary: Safely add untrusted strings to HTML/XML markup.
+Home-page: https://palletsprojects.com/p/markupsafe/
+Author: Armin Ronacher
+Author-email: armin.ronacher@active-4.com
+Maintainer: The Pallets Team
+Maintainer-email: contact@palletsprojects.com
+License: BSD-3-Clause
+Project-URL: Documentation, https://markupsafe.palletsprojects.com/
+Project-URL: Code, https://github.com/pallets/markupsafe
+Project-URL: Issue tracker, https://github.com/pallets/markupsafe/issues
+Platform: UNKNOWN
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Environment :: Web Environment
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 3
+Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: Text Processing :: Markup :: HTML
+Requires-Python: >=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*
+Description-Content-Type: text/x-rst
+
+MarkupSafe
+==========
+
+MarkupSafe implements a text object that escapes characters so it is
+safe to use in HTML and XML. Characters that have special meanings are
+replaced so that they display as the actual characters. This mitigates
+injection attacks, meaning untrusted user input can safely be displayed
+on a page.
+
+
+Installing
+----------
+
+Install and update using `pip`_:
+
+.. code-block:: text
+
+    pip install -U MarkupSafe
+
+.. _pip: https://pip.pypa.io/en/stable/quickstart/
+
+
+Examples
+--------
+
+.. code-block:: pycon
+
+    >>> from markupsafe import Markup, escape
+    >>> # escape replaces special characters and wraps in Markup
+    >>> escape('<script>alert(document.cookie);</script>')
+    Markup(u'&lt;script&gt;alert(document.cookie);&lt;/script&gt;')
+    >>> # wrap in Markup to mark text "safe" and prevent escaping
+    >>> Markup('<strong>Hello</strong>')
+    Markup('<strong>Hello</strong>')
+    >>> escape(Markup('<strong>Hello</strong>'))
+    Markup('<strong>Hello</strong>')
+    >>> # Markup is a text subclass (str on Python 3, unicode on Python 2)
+    >>> # methods and operators escape their arguments
+    >>> template = Markup("Hello <em>%s</em>")
+    >>> template % '"World"'
+    Markup('Hello <em>&#34;World&#34;</em>')
+
+
+Donate
+------
+
+The Pallets organization develops and supports MarkupSafe and other
+libraries that use it.
In order to grow the community of contributors +and users, and allow the maintainers to devote more time to the +projects, `please donate today`_. + +.. _please donate today: https://palletsprojects.com/donate + + +Links +----- + +* Website: https://palletsprojects.com/p/markupsafe/ +* Documentation: https://markupsafe.palletsprojects.com/ +* Releases: https://pypi.org/project/MarkupSafe/ +* Code: https://github.com/pallets/markupsafe +* Issue tracker: https://github.com/pallets/markupsafe/issues +* Test status: https://dev.azure.com/pallets/markupsafe/_build +* Official chat: https://discord.gg/t6rrQZH + + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/RECORD --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/RECORD Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,16 @@ +MarkupSafe-1.1.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +MarkupSafe-1.1.1.dist-info/LICENSE.rst,sha256=SJqOEQhQntmKN7uYPhHg9-HTHwvY-Zp5yESOf_N9B-o,1475 +MarkupSafe-1.1.1.dist-info/METADATA,sha256=-XXnVvCxQP2QbHutIQq_7Pk9OATy-x0NC7gN_3_SCRE,3167 +MarkupSafe-1.1.1.dist-info/RECORD,, +MarkupSafe-1.1.1.dist-info/WHEEL,sha256=bRe_g_g-vZInZP5wOdewl-4AeWx8E2_UC3Ffr2csPyk,109 +MarkupSafe-1.1.1.dist-info/top_level.txt,sha256=qy0Plje5IJuvsCBjejJyhDCjEAdcDLK_2agVcex8Z6U,11 +markupsafe/__init__.py,sha256=oTblO5f9KFM-pvnq9bB0HgElnqkJyqHnFN1Nx2NIvnY,10126 +markupsafe/__pycache__/__init__.cpython-39.pyc,, +markupsafe/__pycache__/_compat.cpython-39.pyc,, +markupsafe/__pycache__/_constants.cpython-39.pyc,, +markupsafe/__pycache__/_native.cpython-39.pyc,, +markupsafe/_compat.py,sha256=uEW1ybxEjfxIiuTbRRaJpHsPFf4yQUMMKaPgYEC5XbU,558 +markupsafe/_constants.py,sha256=zo2ajfScG-l1Sb_52EP3MlDCqO7Y1BVHUXXKRsVDRNk,4690 +markupsafe/_native.py,sha256=d-8S_zzYt2y512xYcuSxq0NeG2DUUvG80wVdTn-4KI8,1873 +markupsafe/_speedups.c,sha256=k0fzEIK3CP6MmMqeY0ob43TP90mVN0DTyn7BAl3RqSg,9884 +markupsafe/_speedups.cpython-39-darwin.so,sha256=6pV4w38f3OduU0ZOZKLTJSqTak4UPPIW_FkIrMdXpJE,35224 diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/WHEEL --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/WHEEL Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.36.2) +Root-Is-Purelib: false +Tag: cp39-cp39-macosx_10_9_x86_64 + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/top_level.txt --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/MarkupSafe-1.1.1.dist-info/top_level.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +markupsafe diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/INSTALLER --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/INSTALLER Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +pip diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/LICENSE --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/LICENSE Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,20 @@ +Copyright (c) 2017-2021 Ingy döt Net +Copyright (c) 2006-2016 Kirill Simonov + +Permission is hereby granted, free of charge, to any person obtaining a copy of +this software and associated documentation files (the "Software"), to deal in +the Software without 
restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/METADATA
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/METADATA	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,46 @@
+Metadata-Version: 2.1
+Name: PyYAML
+Version: 5.4.1
+Summary: YAML parser and emitter for Python
+Home-page: https://pyyaml.org/
+Author: Kirill Simonov
+Author-email: xi@resolvent.net
+License: MIT
+Download-URL: https://pypi.org/project/PyYAML/
+Project-URL: Bug Tracker, https://github.com/yaml/pyyaml/issues
+Project-URL: CI, https://github.com/yaml/pyyaml/actions
+Project-URL: Documentation, https://pyyaml.org/wiki/PyYAMLDocumentation
+Project-URL: Mailing lists, http://lists.sourceforge.net/lists/listinfo/yaml-core
+Project-URL: Source Code, https://github.com/yaml/pyyaml
+Platform: Any
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Cython
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Classifier: Programming Language :: Python :: Implementation :: PyPy
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: Text Processing :: Markup
+Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*
+
+YAML is a data serialization format designed for human readability
+and interaction with scripting languages. PyYAML is a YAML parser
+and emitter for Python.
+
+PyYAML features a complete YAML 1.1 parser, Unicode support, pickle
+support, a capable extension API, and sensible error messages. PyYAML
+supports standard YAML tags and provides Python-specific tags that
+allow representing arbitrary Python objects.
+
+PyYAML is applicable for a broad range of tasks from complex
+configuration files to object serialization and persistence.
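A minimal illustration of the parse/emit round trip described above (an
editorial sketch, not part of the original metadata; the document content is
assumed):

    import yaml

    doc = yaml.safe_load("name: consensus\nmin_mapq: 10\n")  # YAML text -> Python dict
    assert doc["min_mapq"] == 10
    print(yaml.safe_dump(doc), end="")                       # Python dict -> YAML text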
+ diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/RECORD --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/RECORD Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,60 @@ +PyYAML-5.4.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +PyYAML-5.4.1.dist-info/LICENSE,sha256=jTko-dxEkP1jVwfLiOsmvXZBAqcoKVQwfT5RZ6V36KQ,1101 +PyYAML-5.4.1.dist-info/METADATA,sha256=XnrM5LY-uS85ica26gKUK0dGG-xmPjmGfDTSLpIHQFk,2087 +PyYAML-5.4.1.dist-info/RECORD,, +PyYAML-5.4.1.dist-info/WHEEL,sha256=bRe_g_g-vZInZP5wOdewl-4AeWx8E2_UC3Ffr2csPyk,109 +PyYAML-5.4.1.dist-info/top_level.txt,sha256=rpj0IVMTisAjh_1vG3Ccf9v5jpCQwAz6cD1IVU5ZdhQ,11 +_yaml/__init__.py,sha256=04Ae_5osxahpJHa3XBZUAf4wi6XX32gR8D6X6p64GEA,1402 +_yaml/__pycache__/__init__.cpython-39.pyc,, +yaml/__init__.py,sha256=gfp2CbRVhzknghkiiJD2l6Z0pI-mv_iZHPSJ4aj0-nY,13170 +yaml/__pycache__/__init__.cpython-39.pyc,, +yaml/__pycache__/__init__.cpython-39.pyc,sha256=X88WKfrAE5fIr0H7hRLa95JDu8P_aRlbz5VM-kuUGUs,11863 +yaml/__pycache__/composer.cpython-39.pyc,, +yaml/__pycache__/composer.cpython-39.pyc,sha256=-rGk3Kor5pFxwqWVw0LNcM9_UAf_GfHTcizdG_hUpt0,3557 +yaml/__pycache__/constructor.cpython-39.pyc,, +yaml/__pycache__/constructor.cpython-39.pyc,sha256=-kbjmUKQhdsXYd1ssqgrFz36HJD89lH9QVlFYB9Ysjo,20804 +yaml/__pycache__/cyaml.cpython-39.pyc,, +yaml/__pycache__/cyaml.cpython-39.pyc,sha256=M5YXVIjWPq-T25HjRzx2DmiW8I311HXnL2zMuOF01qU,3347 +yaml/__pycache__/dumper.cpython-39.pyc,, +yaml/__pycache__/dumper.cpython-39.pyc,sha256=dAvDYQ2k7zDCKFVpQ5m_EyMWFCJuO7tQ1RZvUB-yaw8,1766 +yaml/__pycache__/emitter.cpython-39.pyc,, +yaml/__pycache__/emitter.cpython-39.pyc,sha256=mxOLyekb_du-qz6ABjMCMyIhyiD-KPMa47SLHWKEi9M,25337 +yaml/__pycache__/error.cpython-39.pyc,, +yaml/__pycache__/error.cpython-39.pyc,sha256=4EfYGIgytR8JQJd1HfziVQpdMquHdikg54Q9RaRU-VI,2317 +yaml/__pycache__/events.cpython-39.pyc,, +yaml/__pycache__/events.cpython-39.pyc,sha256=Ou8wWpg3KCrCtz9i5l2CCxzsE-9Q9D_rQ9R5ovDfLZk,3970 +yaml/__pycache__/loader.cpython-39.pyc,, +yaml/__pycache__/loader.cpython-39.pyc,sha256=TofT3-AAUOz1WftW5j63BdV_l46P8mrfuRKpQrrusU8,2191 +yaml/__pycache__/nodes.cpython-39.pyc,, +yaml/__pycache__/nodes.cpython-39.pyc,sha256=HiZzpvRY7qoJDteq9-GK8q66jzk4EMMF9IeUMeBxyeM,1723 +yaml/__pycache__/parser.cpython-39.pyc,, +yaml/__pycache__/parser.cpython-39.pyc,sha256=qVXCLueOw6VpLWtAgxHYnjyj375tT3vrENtaNty3yeE,11862 +yaml/__pycache__/reader.cpython-39.pyc,, +yaml/__pycache__/reader.cpython-39.pyc,sha256=jR4LGF7ylWwrxyQZ7AJAd_p5u9H08SOcqP59y5nBiBQ,4529 +yaml/__pycache__/representer.cpython-39.pyc,, +yaml/__pycache__/representer.cpython-39.pyc,sha256=gAx36SwKedpxb7qiN9OBKld9pZOQTs233XVDTKTTYkQ,10079 +yaml/__pycache__/resolver.cpython-39.pyc,, +yaml/__pycache__/resolver.cpython-39.pyc,sha256=NBDIM6eqL1xPTtmJwoGuvhFR1WFSUrBpXO2a5E6p6VY,5500 +yaml/__pycache__/scanner.cpython-39.pyc,, +yaml/__pycache__/scanner.cpython-39.pyc,sha256=yyaD6plHDNW49-RLT2NHBSFfw0pJGizoL1332irs_7Q,25247 +yaml/__pycache__/serializer.cpython-39.pyc,, +yaml/__pycache__/serializer.cpython-39.pyc,sha256=phOsn3Czz0or6yqVgiyy9RkWbbBEHp9P83dHOrD9OnU,3318 +yaml/__pycache__/tokens.cpython-39.pyc,, +yaml/__pycache__/tokens.cpython-39.pyc,sha256=er5RIfoFs-jnKy-xqj4Eb2ioKS-pYGTkO_eet5glZG0,4939 +yaml/_yaml.cpython-39-darwin.so,sha256=r6Nyi2WlBIUD8aiRPOT4xp8bFr42DY5SAT-O55v3Nus,465184 +yaml/composer.py,sha256=_Ko30Wr6eDWUeUpauUGT3Lcg9QPBnOPVlTnIMRGJ9FM,4883 
+yaml/constructor.py,sha256=kNgkfaeLUkwQYY_Q6Ff1Tz2XVw_pG1xVE9Ak7z-viLA,28639 +yaml/cyaml.py,sha256=6ZrAG9fAYvdVe2FK_w0hmXoG7ZYsoYUwapG8CiC72H0,3851 +yaml/dumper.py,sha256=PLctZlYwZLp7XmeUdwRuv4nYOZ2UBnDIUy8-lKfLF-o,2837 +yaml/emitter.py,sha256=jghtaU7eFwg31bG0B7RZea_29Adi9CKmXq_QjgQpCkQ,43006 +yaml/error.py,sha256=Ah9z-toHJUbE9j-M8YpxgSRM5CgLCcwVzJgLLRF2Fxo,2533 +yaml/events.py,sha256=50_TksgQiE4up-lKo_V-nBy-tAIxkIPQxY5qDhKCeHw,2445 +yaml/loader.py,sha256=UVa-zIqmkFSCIYq_PgSGm4NSJttHY2Rf_zQ4_b1fHN0,2061 +yaml/nodes.py,sha256=gPKNj8pKCdh2d4gr3gIYINnPOaOxGhJAUiYhGRnPE84,1440 +yaml/parser.py,sha256=ilWp5vvgoHFGzvOZDItFoGjD6D42nhlZrZyjAwa0oJo,25495 +yaml/reader.py,sha256=0dmzirOiDG4Xo41RnuQS7K9rkY3xjHiVasfDMNTqCNw,6794 +yaml/representer.py,sha256=82UM3ZxUQKqsKAF4ltWOxCS6jGPIFtXpGs7mvqyv4Xs,14184 +yaml/resolver.py,sha256=Z1W8AOMA6Proy4gIO2OhUO4IPS_bFNAl0Ca3rwChpPg,8999 +yaml/scanner.py,sha256=KeQIKGNlSyPE8QDwionHxy9CgbqE5teJEz05FR9-nAg,51277 +yaml/serializer.py,sha256=ChuFgmhU01hj4xgI8GaKv6vfM2Bujwa9i7d2FAHj7cA,4165 +yaml/tokens.py,sha256=lTQIzSVw8Mg9wv459-TjiOQe6wVziqaRlqX2_89rp54,2573 diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/WHEEL --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/WHEEL Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.36.2) +Root-Is-Purelib: false +Tag: cp39-cp39-macosx_10_9_x86_64 + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/top_level.txt --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/PyYAML-5.4.1.dist-info/top_level.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,2 @@ +_yaml +yaml diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/INSTALLER --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/INSTALLER Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +pip diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/LICENSE.txt --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/LICENSE.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,19 @@ +Copyright (c) 2005-2016 Ben Bangert + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. 
diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/METADATA
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/METADATA	Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,537 @@
+Metadata-Version: 2.1
+Name: Routes
+Version: 2.5.1
+Summary: Routing Recognition and Generation Tools
+Home-page: https://routes.readthedocs.io/
+Author: Ben Bangert
+Author-email: ben@groovie.org
+License: MIT
+Keywords: routes webob dispatch
+Platform: UNKNOWN
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Topic :: Internet :: WWW/HTTP
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Programming Language :: Python :: Implementation :: PyPy
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.5
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Requires-Dist: six
+Requires-Dist: repoze.lru (>=0.3)
+Provides-Extra: docs
+Requires-Dist: Sphinx ; extra == 'docs'
+Requires-Dist: webob ; extra == 'docs'
+Provides-Extra: middleware
+Requires-Dist: webob ; extra == 'middleware'
+
+Routes is a Python re-implementation of the Rails routes system for mapping
+URL's to Controllers/Actions and generating URL's. Routes makes it easy to
+create pretty and concise URL's that are RESTful with little effort.
+
+Speedy and dynamic URL generation means you get a URL with minimal cruft
+(no big dangling query args). Shortcut features like Named Routes cut down
+on repetitive typing.
+
+See `the documentation for installation and usage of Routes <https://routes.readthedocs.io/>`_.
+
+
+Routes Changelog
+%%%%%%%%%%%%%%%%
+
+Release 2.5.1 (October 13, 2020)
+================================
+* Add compatibility for Python 3.7+. PR #99.
+
+Release 2.5.0 (October 13, 2020)
+================================
+
+* Add graceful fallback for invalid character encoding from request object. Patch by Phillip Baker.
+  PR #94.
+* Enhanced performance for matching routes that share the same static prefix. Patch by George Sakkis.
+  PR #89.
+* Fixed issue with child routes not passing route conditions to the Mapper.connect call. Patch by
+  Robin Abbi. PR #88.
+* Fixed documentation to reflect default value for minimization. Patch by Marcin Raczyński. PR #86.
+* Allow backslash to escape special characters in route paths. Patch by Orhan Kavrakoğlu. PR #83.
+* Resolve invalid escape sequences. Patch by Stephen Finucane. PR #85.
+* Remove support for Python 2.6, 3.3, and 3.4. Patch by Stephen Finucane. PR #85.
+* Remove obsolete Python 2.3 compat code. Patch by Jakub Wilk. PR #80.
+
+Release 2.4.1 (January 1, 2017)
+===============================
+
+* Release as a universal wheel. PR #75.
+* Convert readthedocs links for their .org -> .io migration for hosted projects. PR #67.
+
+Release 2.3.1 (March 30, 2016)
+==============================
+* Backwards compatibility fix - connect should work with mandatory
+  routename and optional path. Patch by Davanum Srinivas (PR #65).
+ +Release 2.3 (March 28, 2016) +============================ +* Fix sub_domain equivalence check. Patch by Nikita Uvarov. +* Add support for protocol-relative URL generation (i.e. starting with double + slash ``//``). PR #60. Patch by Sviatoslav Sydorenko. +* Add support for the ``middleware`` extra requirement, making it possible to + depend on ``webob`` optionally. PR #59. Patch by Sviatoslav Sydorenko. +* Fix matching of an empty string route, which led to exception in earlier + versions. PR #58. Patch by Sviatoslav Sydorenko. +* Add support for the ``requirements`` option when using + mapper.resource to create routes. PR #57. Patch by Sean Dague. +* Concatenation fix when using submappers with path prefixes. Multiple + submappers combined the path prefix inside the controller argument in + non-obvious ways. The controller argument will now be properly carried + through when using submappers. PR #28. + +Release 2.2 (July 21, 2015) +=========================== +* Fix Python 3 support. Patch by Victor Stinner. + +Release 2.1 (January 17, 2015) +============================== +* Fix 3 other route matching groups in route.py to use anonymous groups for + optional sections to avoid exceeding regex limits. Fixes #15. +* Printing a mapper now includes the Controller/action parameters from the + route. Fixes #11. +* Fix regression that didn't allow passing in params 'host', 'protocol', or + 'anchor'. They can now be passed in with a trailing '_' as was possible + before commit d1d1742903fa5ca24ef848a6ae895303f2661b2a. Fixes #7. +* URL generation with/without SCRIPT_NAME was resulting in the URL cache + failing to return the appropriate cached URL generation. The URL cache + should always include the SCRIPT_NAME, even if it's empty, in the cache + to avoid this, and now does. Fixes #6. +* Extract Route creation into separate method in Mapper. Subclasses of Route + can be created by Mappers now. +* Use the first X_FORWARDED_FOR value if there are multiple proxies in the + path. Fixes #5. + +Release 2.0 (November 17, 2013) +=============================== +* Python 3.2/3.3 Support. Fixes Issue #2. Thanks to Alejandro Sánchez for + the pull request! + +Release 1.13 (March 12, 2012) +============================= +* Fix bug with dots forcing extension by default. The portion with the dot can + now be recognized. Patch by Michael Basnight. + +Release 1.12.3 (June 5, 2010) +============================= +* Fix bug with URLGenerator not properly including SCRIPT_NAME when generating + URL's and the singleton is not present. + +Release 1.12.2 (May 5, 2010) +============================ +* Fix bug with routes URLGenerator not properly including SCRIPT_NAME when + generating qualified URL's. + +Release 1.12.1 (March 11, 2010) +=============================== +* Fix bug with routes not generating URL's with callables in defaults. +* Fix bug with routes not handling sub-domain defaults during generation. + +Release 1.12 (February 28, 2010) +================================ +* Split up the Routes docs. +* Fix bug with relative URL's using qualified merging host and URL without + including the appropriate slash. Fixes #13. +* Fix bug with mapper.extend and Routes modifying their original args. + Fixes #24. +* Fix url.current() not returning current args when explicit is True. +* Added explicit way to directly use the Mapper to match with environ. +* Fix bug with improper len placement for submapper.
+* Adding regular expression builder for entire regexp for faster rejection + in a single regexp match should none of the routes match. +* Give Mapper a tabular string representation. +* Make SubMapper objects nestable and add route-generation helpers. +* Add SubMapper-based collections. +* Make the deprecated Mapper.minimization False (disabled) by default. +* Make the mapper explicit (true) by default. + +Release 1.11 (September 28, 2009) +================================= +* Extensive documentation rewrite. +* Added Mapper.extend function that allows one to add lists of Routes objects + to the mapper in one batch, optionally with a path_prefix. +* Added Mapper.submapper function that returns a SubMapper object to enable + easier declaration of routes that have multiple keyword argument options + in common. +* Mapper controller_scan argument now handles None, and lists of controller + names in addition to a callable. +* Route object now takes a name parameter, which is the name it responds to. + This name is automatically added when called by using Mapper's connect + class method. +* Added optional LRU object for use with Routes when URL's change too often + for the Routes urlcache dict to be a viable option. + +Release 1.10.3 (February 8, 2009) +================================= +* Tweak to use WebOb Request rather than Paste. +* Performance tweaks for URL recognition. +* Bugfix for routes.middleware not re.escaping the path_info before moving it + to the script name. + +Release 1.10.2 (January 11, 2009) +================================= +* Bugfix for unicode encoding problems with non-minimized Route generation. + Spotted by Wichert Akkerman. +* Bugfix for when environ is {} in unit tests. + +Release 1.10.1 (September 27, 2008) +=================================== +* Removing LRU cache due to performance and threading issues. Cache does hit + a max-size for the given routes. + +Release 1.10 (September 24, 2008) +================================= +* Adding LRU cache instead of just dict for caching generated routes. This + avoids slow memory leakage over long-running and non-existent route + generation. +* Adding URLGenerator object. +* Adding redirect routes. +* Static routes can now interpolate variable parts in the path if using {} + variable part syntax. +* Added sub_domain condition option to accept False or None, to require that + there be no sub-domain provided for the route to match. + +Release 1.9.2 (July 8, 2008) +============================ +* Fixed bug in url_for which caused it to return a literal when it shouldn't + have. + +Release 1.9.1 (June 28, 2008) +============================= +* Fixed bug in formatted route recognition with formatting being absorbed + into the id. + +Release 1.9 (June 12, 2008) +=========================== +* Fix undefined arg bug in url_for. +* Fixed bug with url_for not working properly outside of a request when + sub-domains are active. Thanks Pavel Skvazh. +* Add non-minimization option to Routes and the Mapper for generation and + recognition. +* Add Routes 2.0 style syntax for making routes and regexp. For example, this + route will now work: '{controller}/{action}/{id}'. +* Fixed Routes to not use quote_plus when making URL's. +* WARNING: Mapper now comes with hardcode_names set to True by default. This + means routes generated by name must work for the URL. +* Actually respect having urlcache disabled. +* WARNING: Calling url_for with a set of args that returns None now throws an + exception. 
Code that previously checked to see if a url could be made must + be updated accordingly. +* Updated url_for to return url in a literal for use in templating that may + try to escape it again. +* Added option to use X_FORWARDED_PROTO for proxying behind https to work + easier. +* Fixed map.resource to be less restrictive on id than just spaces. +* Fixed Mapper.create_regs not being thread safe, particularly when + always_scan=True. + +Release 1.8 (March 28, 2008) +============================ +* Fixed bug of map.resource not allowing spaces in id. +* Fixed url generation to properly handle unicode defaults in addition to + unicode arguments. +* Fixed url_for to handle lists as keyword args when generating query + parameters. +* WARNING: Changed map.resource to not use ';', for actions, but the + normal '/'. This means that formatted URL's will also now have the format + come AFTER the action. Ie: /messages/4.xml;rss -> /messages/4/rss.xml + +Release 1.7.3 (May 28th, 2008) +============================== +* Fixed triple escaping bug, since WSGI servers are responsible for basic + unescaping. + +Release 1.7.2 (Feb. 27th, 2008) +=============================== +* Fixed bug with keyword args not being coerced to raw string properly. + +Release 1.7.1 (Nov. 16th, 2007) +=============================== +* Fixed bug with sub-domains from route defaults getting encoded to unicode + resulting in a unicode route which then caused url_for to throw an + exception. +* Removed duplicate assignment in map.resource. Patch by Mike Naberezny. +* Applied test patch fix for path checking. Thanks Mike Naberezny. +* Added additional checking of remaining URL, to properly swallow periods in + the appropriate context. Fixes #57. +* Added mapper.hardcode_names option which restricts url generation to the + named route during generation rather than using the routes default options + during generation. +* Fixed the special '_method' attribute not being recognized during POST + requests of Content-Type 'multipart/form-data'. + +Release 1.7 (June 8th, 2007) +============================ +* Fixed url_unquoting to only apply for strings. +* Added _encoding option to individual routes to toggle decoding/encoding on a + per route basis. +* Fixed route matching so that '.' and other special chars are only part of the + match should they not be followed by that character. Fixed regexp creation so + that route parts with '.' in them aren't matched properly. Fixes #48. +* Fixed Unicode decoding/encoding so that the URL decoding and encoding can be + set on the mapper with mapper.encoding. Fixes #40. +* Don't assume environ['CONTENT_TYPE'] always exists: it may be omitted + according to the WSGI PEP. +* Fixed Unicode decode/encoding of path_info dynamic/wildcard parts so that + PATH_INFO will stay a raw string as it should. Fixes #51. +* Fixed url_for (thus redirect_to) to throw an exception if a Unicode + string is returned as that's an invalid URL. Fixes #46. +* Fixed Routes middleware to only parse POST's if the content type is + application/x-www-form-urlencoded for a HTML form. This properly avoids + parsing wsgi.input when it doesn't need to be. + +Release 1.6.3 (April 10th, 2007) +================================ +* Fixed matching so that an attempt to match an empty path raises a + RouteException. Fixes #44. +* Added ability to use characters in URL's such as '-' and '_' in + map.resource. Patch by Wyatt Baldwin. Fixes #45.
+* Updated Mapper.resource handling with name_prefix and path_prefix checking + to specify defaults. Also ensures that should either of them be set, they + override the prefixes should parent_resource be specified. Patch by Wyatt + Baldwin. Fixes #42. +* Added utf-8 decoding of incoming path arguments, with fallback to ignoring + them in the very rare cases a malformed request URL is sent. Patch from + David Smith. +* Fixed treatment of '#' character as something that can be left off and + used in route paths. Found by Mike Orr. +* Added ability to specify parent resource to map.resource command. Patch from + Wyatt Baldwin. +* Fixed formatted route issue with map.resource when additional collection + methods are specified. Added unit tests to verify the collection methods + work properly. +* Updated URL parsing to properly use HTTP_HOST for hostname + port info before + falling back to SERVER_PORT and SERVER_NAME. Fixes #43. +* Added member_name and collection_name setting to Route object when made with + map.resource. +* Updated routes.middleware to make the Routes matched accessible as + environ['routes.route']. +* Updating mapper object to use thread local for request data (such as + environ) and middleware now deletes environ references at the end of the + request. +* Added explicit option to Routes and Mapper. Routes _explicit setting will + prevent the Route defaults from being implicitly set, while setting Mapper + to explicit will prevent Route implicit defaults and stop url_for from using + Route memory. Fixes #38. +* Updated config object so that the route is attached if possible. +* Adding standard logging usage with debug messages. +* Added additional test for normal '.' match and fixed new special matching to + match it properly. Thanks David Smith. +* Fixed hanging special char issue with 'special' URL chars at the end of a URL + that are missing the variable afterwards. +* Changed Routes generation and recognition to handle other 'special' URL chars + , . and ; as if they were /. This lets them be optionally left out of the + resulting generated URL. Feature requested by David Smith. +* Fixed lookahead assertion in regexp builder to properly handle two grouped + patterns in a row. +* Applied patch to generation and matching to handle Unicode characters + properly. Reported with patch by David Smith. + +Release 1.6.2 (Jan. 5, 2007) +============================ +* Fixed issue with method checking not properly handling different letter + cases in REQUEST_METHOD. Reported by Sean Davis. +* redirect_to now supports config.redirect returning a redirect, not just + raising one. + +Release 1.6.1 (Dec. 29, 2006) +============================= +* Fixed zipsafe flag to be False. + +Release 1.6 (Dec. 14th, 2006) +============================= +* Fixed append_slash to take effect in the route generation itself instead of + relying on url_for function. Reported by ToddG. +* Added additional url_for tests to ensure map.resource generates proper named + routes. +* WARNING: Changed map.resource initialization to accept individual member and + collection names to generate proper singular and plural route names. Those + using map.resource will need to update their routes and url_for statements + accordingly. +* Added additional map.resource recognition tests. +* Added WSGI middleware that does route resolving using new `WSGI.org Routing + Vars Spec `_. +* Added _absolute keyword option route connect to ignore SCRIPT_NAME settings. + Suggested by Ian Bicking. + +Release 1.5.2 (Oct. 
16th, 2006) +=============================== +* Fixed qualified keyword to keep host port names when used, unless a host + is specifically passed in. Reported by Jon Rosebaugh. +* Added qualified keyword option to url_for to have it generate a full + URL. Resolves #29. +* Fixed examples in url_for doc strings so they'll be accurate. + +Release 1.5.1 (Oct. 4th, 2006) +============================== +* Fixed bug with escaping part names in the regular expression, reported by + James Taylor. + +Release 1.5 (Sept. 19th, 2006) +============================== +* Significant updates to map.resource and unit tests that comb it thoroughly + to ensure it's creating all the proper routes (it now is). Increased unit + testing coverage to 95%. +* Added unit tests to ensure controller_scan works properly with nested + controller files and appropriately scans the directory structure. This + brings the Routes util module up to full code coverage. +* Fixed url_for so that when the protocol is changed, port information is + removed from the host. +* Added more thorough testing to _RequestConfig object and the ability to + set your own object. This increases testing coverage of the __init__ module + to 100%. +* Fixed bug with sub_domain not maintaining port information in url_for and + added unit tests. Reported by Jonathan Rosebaugh. +* Added unit tests to ensure sub_domain option works with named routes, cleaned + up url_for memory argument filtering. Fixed bug with named routes and sub_domain + option not working together, reported by Jonathan Rosebaugh. +* Changed order in which sub-domain is added to match-dict so it can be used + in a conditions function. + +Release 1.4.1 (Sept. 6th, 2006) +=============================== +* Added sub_domains option to mapper, along with sub_domains_ignore list for + subdomains that are considered equivalent to the main domain. When sub_domains + is active, url_for will now take a sub_domain option that can alter the host + the route will go to. +* Added ability for filter functions to provide a _host, _protocol, _anchor arg + which is then used to create the URL with the appropriate host/protocol/anchor + destination. +* Patch applied from Ticket #28. Resolves issue with Mapper's controller_scan + function requiring a valid directory argument. Submitted by Zoran Isailovski. + +Release 1.4 (July 21, 2006) +=========================== +* Fixed bug with map.resource related to member methods, found in Rails version. +* Fixed bug with map.resource member methods not requiring a member id. +* Fixed bug related to handling keyword argument controller. +* Added map.resource command which can automatically generate a batch of routes intended + to be used in a REST-ful manner by a web framework. +* Added URL generation handling for a 'method' argument. If 'method' is specified, it + is not dropped and will be changed to '_method' for use by the framework. +* Added conditions option to map.connect. Accepts a dict with optional keyword args + 'method' or 'function'. Method is a list of HTTP methods that are valid for the route. + Function is a function that will be called with environ, matchdict where matchdict is + the dict created by the URL match. +* Fixed redirect_to function for using absolute URL's. redirect_to now passes all args to + url_for, then passes the resulting URL to the redirect function. Reported by climbus.
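The ``conditions`` option introduced in 1.4 above takes this shape in practice; the paths, controller names, and the predicate are illustrative::

    # Restrict a route to specific HTTP methods:
    map.connect('/messages/create', controller='messages', action='create',
                conditions={'method': ['POST']})

    # Or gate matching on a callable that receives (environ, match_dict)
    # and returns True or False:
    def only_local(environ, match_dict):
        return environ.get('REMOTE_ADDR') == '127.0.0.1'

    map.connect('/admin', controller='admin', action='index',
                conditions={'function': only_local})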
+ +Release 1.3.2 (April 30th, 2006) +================================ +* Fixed _filter bug with inclusion in match dict during matching, reported by David Creemer. +* Fixed improper url quoting by using urllib.encode, patch by Jason Culverhouse. + +Release 1.3.1 (April 4th, 2006) +=============================== +* Mapper has an optional attribute ``append_slash``. When set to ``True``, any URL's + generated will have a slash appended to the end. +* Fixed prefix option so that if the PATH_INFO is empty after prefix regexp, it's set to + '/' so the match proceeds ok. +* Fixed prefix bug that caused routes after the initial one to not see the proper url + for matching. Caught by Jochen Kupperschmidt. + +Release 1.3 (Feb. 25th, 2006) +============================= +* url_for keyword filters: + Named routes can now have a _filter argument that should specify a function that takes + a dict as its sole argument. The dict will contain the full set of keywords passed to + url_for, which the function can then modify as it pleases. The new dict will then be + used as if it was the original set of keyword args given to url_for. +* Fixed Python 2.3 incompatibility due to using keyword arg for a sort statement + when using the built-in controller scanner. + +Release 1.2 (Feb. 17th, 2006) +============================= +* If a named route doesn't exist, and a url_for call is used, instead of using the + keyword arguments to generate a URL, they will be used as query args for the raw + URL supplied. (Backwards Incompatible) +* If Mapper has debug=True, using match will return two additional values, the route + that matched, if one did match. And a list of routes that were tried, and information + about why they didn't pass. +* url_for enhancements: + Can now be used with 'raw' URL's to generate proper url's for static content that + will then automatically include SCRIPT_NAME if necessary + Static named routes can now be used to shortcut common path information as desired. +* Controller Scanner will now sort controller names so that the longest one is first. This + ensures that the deepest nested controller is executed first before more shallow ones to + increase predictability. +* Controller Scanner now scans directories properly, the version in 1.1 left off the + directory prefix when creating the list of controllers. + (Thanks to Justin for drawing my attention to it) + +Release 1.1 (Jan. 13th, 2006) +============================= +* Routes Mapper additions: + Now takes several optional arguments that determine how it will + generate the regexp's. + Can now hold a function for use when determining what the available + controllers are. Comes with a default directory scanner + Given a directory for the default scanner or a function, the Mapper + will now automatically run it to get the controller list when needed +* Syntax available for splitting routes to allow more complex route paths, such + as ':controller/:(action)-:(id).html' +* Easier setup/integration with Routes per request. Setting the environ in a + WSGI environ will run match, and setup everything needed for url_for/etc. + +Release 1.0.2 (Dec. 30th, 2005) +=============================== +* Routes where a default was present but None were filling in improper values. +* Passing a 0 would evaluate to None during generation, resulting in missing + URL parts. + +Release 1.0.1 (Dec.
18th, 2005) +=============================== +* Request Local Callable - You can now designate your own callable function that + should then be used to store the request_config data. This is most useful for + environments where it's possible multiple requests might be running in a single + thread. The callable should return a request specific object for attributes to + be attached. See routes.__init__.py for more information. + +Release 1.0 (Nov. 21st, 2005) +============================= +* routes.__init__ will now load the common symbols most people will + want to actually use. + Thus, you can either:: + + from routes import * + + Or:: + + from routes import request_config, Mapper + + The following names are available for importing from routes:: + + request_config, Mapper, url_for, redirect_to + +* Route Names - You can now name a route, which will save a copy of the defaults + defined for later use by url_for or redirect_to. + Thus, a route and url_for looking like this:: + + m.connect('home', controller='blog', action='splash') + url_for(controller='blog', action='splash') # => /home + + Can now be used with a name:: + + m.connect('home_url','home', controller='blog', action='splash') + url_for('home_url') # => /home + + Additional keywords can still be added to url_for and will override defaults in + the named route. +* Trailing / - Route recognition earlier failed on trailing slashes, not really a bug, + not really a feature I guess. Anyways, trailing slashes are o.k. now as in the Rails + version. +* redirect_to now has two sets of tests to ensure it works properly. + + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/RECORD --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/RECORD Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,18 @@ +Routes-2.5.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +Routes-2.5.1.dist-info/LICENSE.txt,sha256=JkhQv9ruOaVLZglLvNhsDeZrmr0lmPAgvdmNOeMmUAk,1078 +Routes-2.5.1.dist-info/METADATA,sha256=TzH1mWlM9kl0_X7Uxm2Xq9BzSYmjrPVZynPpr2uguWw,25346 +Routes-2.5.1.dist-info/RECORD,, +Routes-2.5.1.dist-info/WHEEL,sha256=ADKeyaGyKF5DwBNE0sRE5pvW-bSkFMJfBuhzZ3rceP4,110 +Routes-2.5.1.dist-info/top_level.txt,sha256=X0IsYfGZPW6lNH8wyJiaSyH7PrsKbBUZaUOgwdt4V6Q,7 +routes/__init__.py,sha256=rrpWr0110dQks26SvtCh6OeS7s5jFnvZTEtZ7BuS-Rg,5541 +routes/__pycache__/__init__.cpython-39.pyc,, +routes/__pycache__/base.cpython-39.pyc,, +routes/__pycache__/mapper.cpython-39.pyc,, +routes/__pycache__/middleware.cpython-39.pyc,, +routes/__pycache__/route.cpython-39.pyc,, +routes/__pycache__/util.cpython-39.pyc,, +routes/base.py,sha256=deoEwjOZqQfaoonw1q0_J7Bcnr6V5_xr044Jjo9hUm8,134 +routes/mapper.py,sha256=q5hMFknLcJUHB-QW2kJl87qHyWpDP2czuFHvX2e4gMs,50137 +routes/middleware.py,sha256=JVRiBQz8HjHE1OCWcfBQroPEE2lSGgq8Z4ylaNodqtM,6497 +routes/route.py,sha256=x64PzEo8kRz--34G4tGnrydRUOhC_sc7dChpjKq0an4,29723 +routes/util.py,sha256=SrPTZ3PJrK3drS8k228d2gQWQNcAVeAUJnU8IK_hzbE,19897 diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/WHEEL --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/WHEEL Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.35.1) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/top_level.txt --- /dev/null Thu Jan
01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/Routes-2.5.1.dist-info/top_level.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +routes diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/allure.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/allure.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/appdirs.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/appdirs.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/bagit.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/bagit.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/configparser.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/configparser.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/decorator.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/decorator.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/dot_parser.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/dot_parser.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/filelock.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/filelock.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/mistune.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/mistune.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/mypy_extensions.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/mypy_extensions.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/oyaml.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/oyaml.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/pydot.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/pydot.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/pyparsing.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/pyparsing.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/six.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/six.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/tabulate.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/tabulate.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/__pycache__/typing_extensions.cpython-39.pyc Binary file env/lib/python3.9/site-packages/__pycache__/typing_extensions.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/_distutils_hack/__init__.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/_distutils_hack/__init__.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,123 @@ +import sys +import os +import re +import importlib +import warnings + + +is_pypy = '__pypy__' in sys.builtin_module_names + + +def warn_distutils_present(): + if 'distutils' 
not in sys.modules: + return + if is_pypy and sys.version_info < (3, 7): + # PyPy for 3.6 unconditionally imports distutils, so bypass the warning + # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250 + return + warnings.warn( + "Distutils was imported before Setuptools, but importing Setuptools " + "also replaces the `distutils` module in `sys.modules`. This may lead " + "to undesirable behaviors or errors. To avoid these issues, avoid " + "using distutils directly, ensure that setuptools is installed in the " + "traditional way (e.g. not an editable install), and/or make sure " + "that setuptools is always imported before distutils.") + + +def clear_distutils(): + if 'distutils' not in sys.modules: + return + warnings.warn("Setuptools is replacing distutils.") + mods = [name for name in sys.modules if re.match(r'distutils\b', name)] + for name in mods: + del sys.modules[name] + + +def enabled(): + """ + Allow selection of distutils by environment variable. + """ + which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'stdlib') + return which == 'local' + + +def ensure_local_distutils(): + clear_distutils() + distutils = importlib.import_module('setuptools._distutils') + distutils.__name__ = 'distutils' + sys.modules['distutils'] = distutils + + # sanity check that submodules load as expected + core = importlib.import_module('distutils.core') + assert '_distutils' in core.__file__, core.__file__ + + +def do_override(): + """ + Ensure that the local copy of distutils is preferred over stdlib. + + See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401 + for more motivation. + """ + if enabled(): + warn_distutils_present() + ensure_local_distutils() + + +class DistutilsMetaFinder: + def find_spec(self, fullname, path, target=None): + if path is not None: + return + + method_name = 'spec_for_{fullname}'.format(**locals()) + method = getattr(self, method_name, lambda: None) + return method() + + def spec_for_distutils(self): + import importlib.abc + import importlib.util + + class DistutilsLoader(importlib.abc.Loader): + + def create_module(self, spec): + return importlib.import_module('setuptools._distutils') + + def exec_module(self, module): + pass + + return importlib.util.spec_from_loader('distutils', DistutilsLoader()) + + def spec_for_pip(self): + """ + Ensure stdlib distutils when running under pip. + See pypa/pip#8761 for rationale. + """ + if self.pip_imported_during_build(): + return + clear_distutils() + self.spec_for_distutils = lambda: None + + @staticmethod + def pip_imported_during_build(): + """ + Detect if pip is being imported in a build script. Ref #2355. 
+ """ + import traceback + return any( + frame.f_globals['__file__'].endswith('setup.py') + for frame, line in traceback.walk_stack(None) + ) + + +DISTUTILS_FINDER = DistutilsMetaFinder() + + +def add_shim(): + sys.meta_path.insert(0, DISTUTILS_FINDER) + + +def remove_shim(): + try: + sys.meta_path.remove(DISTUTILS_FINDER) + except ValueError: + pass diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/_distutils_hack/__pycache__/__init__.cpython-39.pyc Binary file env/lib/python3.9/site-packages/_distutils_hack/__pycache__/__init__.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/_distutils_hack/__pycache__/override.cpython-39.pyc Binary file env/lib/python3.9/site-packages/_distutils_hack/__pycache__/override.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/_distutils_hack/override.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/_distutils_hack/override.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +__import__('_distutils_hack').do_override() diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/_yaml/__init__.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/_yaml/__init__.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,33 @@ +# This is a stub package designed to roughly emulate the _yaml +# extension module, which previously existed as a standalone module +# and has been moved into the `yaml` package namespace. +# It does not perfectly mimic its old counterpart, but should get +# close enough for anyone who's relying on it even when they shouldn't. +import yaml + +# in some circumstances, the yaml module we imoprted may be from a different version, so we need +# to tread carefully when poking at it here (it may not have the attributes we expect) +if not getattr(yaml, '__with_libyaml__', False): + from sys import version_info + + exc = ModuleNotFoundError if version_info >= (3, 6) else ImportError + raise exc("No module named '_yaml'") +else: + from yaml._yaml import * + import warnings + warnings.warn( + 'The _yaml extension module is now located at yaml._yaml' + ' and its location is subject to change. To use the' + ' LibYAML-based parser and emitter, import from `yaml`:' + ' `from yaml import CLoader as Loader, CDumper as Dumper`.', + DeprecationWarning + ) + del warnings + # Don't `del yaml` here because yaml is actually an existing + # namespace member of _yaml. + +__name__ = '_yaml' +# If the module is top-level (i.e. not a part of any specific package) +# then the attribute should be set to ''. 
+# https://docs.python.org/3.8/library/types.html +__package__ = '' diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/_yaml/__pycache__/__init__.cpython-39.pyc Binary file env/lib/python3.9/site-packages/_yaml/__pycache__/__init__.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,39 @@ +from allure_commons._allure import title +from allure_commons._allure import description, description_html +from allure_commons._allure import label +from allure_commons._allure import severity +from allure_commons._allure import tag +from allure_commons._allure import id +from allure_commons._allure import suite, parent_suite, sub_suite +from allure_commons._allure import epic, feature, story +from allure_commons._allure import link, issue, testcase +from allure_commons._allure import Dynamic as dynamic +from allure_commons._allure import step +from allure_commons._allure import attach +from allure_commons.types import Severity as severity_level +from allure_commons.types import AttachmentType as attachment_type + + +__all__ = [ + 'title', + 'description', + 'description_html', + 'label', + 'severity', + 'suite', + 'parent_suite', + 'sub_suite', + 'tag', + 'id', + 'epic', + 'feature', + 'story', + 'link', + 'issue', + 'testcase', + 'step', + 'dynamic', + 'severity_level', + 'attach', + 'attachment_type' +] diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__init__.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/__init__.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,12 @@ +from allure_commons._hooks import hookimpl # noqa: F401 +from allure_commons._core import plugin_manager # noqa: F401 +from allure_commons._allure import fixture # noqa: F401 +from allure_commons._allure import test # noqa: F401 + + +__all__ = [ + 'hookimpl', + 'plugin_manager', + 'fixture', + 'test' +] diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/__init__.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/__init__.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/_allure.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/_allure.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/_compat.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/_compat.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/_core.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/_core.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/_hooks.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/_hooks.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/lifecycle.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/lifecycle.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/logger.cpython-39.pyc 
Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/logger.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/mapping.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/mapping.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/model2.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/model2.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/reporter.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/reporter.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/types.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/types.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/__pycache__/utils.cpython-39.pyc Binary file env/lib/python3.9/site-packages/allure_commons/__pycache__/utils.cpython-39.pyc has changed diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/_allure.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/_allure.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,244 @@ +from functools import wraps +from typing import Any, Callable, TypeVar + +from allure_commons._core import plugin_manager +from allure_commons.types import LabelType, LinkType +from allure_commons.utils import uuid4 +from allure_commons.utils import func_parameters, represent + +_TFunc = TypeVar("_TFunc", bound=Callable[..., Any]) + + +def safely(result): + if result: + return result[0] + else: + def dummy(function): + return function + return dummy + + +def title(test_title): + return safely(plugin_manager.hook.decorate_as_title(test_title=test_title)) + + +def description(test_description): + return safely(plugin_manager.hook.decorate_as_description(test_description=test_description)) + + +def description_html(test_description_html): + return safely(plugin_manager.hook.decorate_as_description_html(test_description_html=test_description_html)) + + +def label(label_type, *labels): + return safely(plugin_manager.hook.decorate_as_label(label_type=label_type, labels=labels)) + + +def severity(severity_level): + return label(LabelType.SEVERITY, severity_level) + + +def epic(*epics): + return label(LabelType.EPIC, *epics) + + +def feature(*features): + return label(LabelType.FEATURE, *features) + + +def story(*stories): + return label(LabelType.STORY, *stories) + + +def suite(suite_name): + return label(LabelType.SUITE, suite_name) + + +def parent_suite(parent_suite_name): + return label(LabelType.PARENT_SUITE, parent_suite_name) + + +def sub_suite(sub_suite_name): + return label(LabelType.SUB_SUITE, sub_suite_name) + + +def tag(*tags): + return label(LabelType.TAG, *tags) + + +def id(id): + return label(LabelType.ID, id) + + +def link(url, link_type=LinkType.LINK, name=None): + return safely(plugin_manager.hook.decorate_as_link(url=url, link_type=link_type, name=name)) + + +def issue(url, name=None): + return link(url, link_type=LinkType.ISSUE, name=name) + + +def testcase(url, name=None): + return link(url, link_type=LinkType.TEST_CASE, name=name) + + +class Dynamic(object): + + @staticmethod + def title(test_title): + 
plugin_manager.hook.add_title(test_title=test_title) + + @staticmethod + def description(test_description): + plugin_manager.hook.add_description(test_description=test_description) + + @staticmethod + def description_html(test_description_html): + plugin_manager.hook.add_description_html(test_description_html=test_description_html) + + @staticmethod + def label(label_type, *labels): + plugin_manager.hook.add_label(label_type=label_type, labels=labels) + + @staticmethod + def severity(severity_level): + Dynamic.label(LabelType.SEVERITY, severity_level) + + @staticmethod + def feature(*features): + Dynamic.label(LabelType.FEATURE, *features) + + @staticmethod + def story(*stories): + Dynamic.label(LabelType.STORY, *stories) + + @staticmethod + def tag(*tags): + Dynamic.label(LabelType.TAG, *tags) + + @staticmethod + def link(url, link_type=LinkType.LINK, name=None): + plugin_manager.hook.add_link(url=url, link_type=link_type, name=name) + + @staticmethod + def issue(url, name=None): + Dynamic.link(url, link_type=LinkType.ISSUE, name=name) + + @staticmethod + def testcase(url, name=None): + Dynamic.link(url, link_type=LinkType.TEST_CASE, name=name) + + @staticmethod + def suite(suite_name): + Dynamic.label(LabelType.SUITE, suite_name) + + @staticmethod + def parent_suite(parent_suite_name): + Dynamic.label(LabelType.PARENT_SUITE, parent_suite_name) + + @staticmethod + def sub_suite(sub_suite_name): + Dynamic.label(LabelType.SUB_SUITE, sub_suite_name) + + +def step(title): + if callable(title): + return StepContext(title.__name__, {})(title) + else: + return StepContext(title, {}) + + +class StepContext: + + def __init__(self, title, params): + self.title = title + self.params = params + self.uuid = uuid4() + + def __enter__(self): + plugin_manager.hook.start_step(uuid=self.uuid, title=self.title, params=self.params) + + def __exit__(self, exc_type, exc_val, exc_tb): + plugin_manager.hook.stop_step(uuid=self.uuid, title=self.title, exc_type=exc_type, exc_val=exc_val, + exc_tb=exc_tb) + + def __call__(self, func: _TFunc) -> _TFunc: + @wraps(func) + def impl(*a, **kw): + __tracebackhide__ = True + params = func_parameters(func, *a, **kw) + args = list(map(lambda x: represent(x), a)) + with StepContext(self.title.format(*args, **params), params): + return func(*a, **kw) + return impl + + +class Attach(object): + + def __call__(self, body, name=None, attachment_type=None, extension=None): + plugin_manager.hook.attach_data(body=body, name=name, attachment_type=attachment_type, extension=extension) + + def file(self, source, name=None, attachment_type=None, extension=None): + plugin_manager.hook.attach_file(source=source, name=name, attachment_type=attachment_type, extension=extension) + + +attach = Attach() + + +class fixture(object): + def __init__(self, fixture_function, parent_uuid=None, name=None): + self._fixture_function = fixture_function + self._parent_uuid = parent_uuid + self._name = name if name else fixture_function.__name__ + self._uuid = uuid4() + self.parameters = None + + def __call__(self, *args, **kwargs): + self.parameters = func_parameters(self._fixture_function, *args, **kwargs) + + with self: + return self._fixture_function(*args, **kwargs) + + def __enter__(self): + plugin_manager.hook.start_fixture(parent_uuid=self._parent_uuid, + uuid=self._uuid, + name=self._name, + parameters=self.parameters) + + def __exit__(self, exc_type, exc_val, exc_tb): + plugin_manager.hook.stop_fixture(parent_uuid=self._parent_uuid, + uuid=self._uuid, + name=self._name, + exc_type=exc_type, + 
exc_val=exc_val, + exc_tb=exc_tb) + + +class test(object): + def __init__(self, _test, context): + self._test = _test + self._uuid = uuid4() + self.context = context + self.parameters = None + + def __call__(self, *args, **kwargs): + self.parameters = func_parameters(self._test, *args, **kwargs) + + with self: + return self._test(*args, **kwargs) + + def __enter__(self): + plugin_manager.hook.start_test(parent_uuid=None, + uuid=self._uuid, + name=None, + parameters=self.parameters, + context=self.context) + + def __exit__(self, exc_type, exc_val, exc_tb): + plugin_manager.hook.stop_test(parent_uuid=None, + uuid=self._uuid, + name=None, + context=self.context, + exc_type=exc_type, + exc_val=exc_val, + exc_tb=exc_tb) diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/_compat.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/_compat.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,80 @@ +from __future__ import absolute_import, division, print_function +import types + + +def format_exception_only(etype, value): + """Format the exception part of a traceback. + + The arguments are the exception type and value such as given by + sys.last_type and sys.last_value. The return value is a list of + strings, each ending in a newline. + + Normally, the list contains a single string; however, for + SyntaxError exceptions, it contains several lines that (when + printed) display detailed information about where the syntax + error occurred. + + The message indicating which exception occurred is always the last + string in the list. + + """ + + # An instance should not have a meaningful value parameter, but + # sometimes does, particularly for string exceptions, such as + # >>> raise string1, string2 # deprecated + # + # Clear these out first because issubtype(string1, SyntaxError) + # would throw another exception and mask the original problem. + if (isinstance(etype, BaseException) or + isinstance(etype, types.InstanceType) or + etype is None or type(etype) is str): # noqa: E129 + return [_format_final_exc_line(etype, value)] + + stype = etype.__name__ + + if not issubclass(etype, SyntaxError): + return [_format_final_exc_line(stype, value)] + + # It was a syntax error; show exactly where the problem was found. 
+ lines = [] + try: + msg, (filename, lineno, offset, badline) = value.args + except Exception: + pass + else: + filename = filename or "<string>" + lines.append(' File "%s", line %d\n' % (filename, lineno)) + if badline is not None: + lines.append(' %s\n' % badline.strip()) + if offset is not None: + caretspace = badline.rstrip('\n')[:offset].lstrip() + # non-space whitespace (likes tabs) must be kept for alignment + caretspace = ((c.isspace() and c or ' ') for c in caretspace) + # only three spaces to account for offset1 == pos 0 + lines.append(' %s^\n' % ''.join(caretspace)) + value = msg + + lines.append(_format_final_exc_line(stype, value)) + return lines + + +def _format_final_exc_line(etype, value): + """Return a list of a single line -- normal case for format_exception_only""" + valuestr = _some_str(value) + if value is None or not valuestr: + line = "%s\n" % etype + else: + line = "%s: %s\n" % (etype, valuestr) + return line + + +def _some_str(value): + try: + return str(value) + except UnicodeError: + try: + value = unicode(value) # noqa: F821 + return value.encode('utf-8', 'replace') + except Exception: + pass + return '<unprintable %s object>' % type(value).__name__ diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/_core.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/_core.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,25 @@ +import threading +from six import with_metaclass +from pluggy import PluginManager +from allure_commons import _hooks + + +class MetaPluginManager(type): + _storage = threading.local() + + @staticmethod + def get_plugin_manager(): + if not hasattr(MetaPluginManager._storage, 'plugin_manager'): + MetaPluginManager._storage.plugin_manager = PluginManager('allure') + MetaPluginManager._storage.plugin_manager.add_hookspecs(_hooks.AllureUserHooks) + MetaPluginManager._storage.plugin_manager.add_hookspecs(_hooks.AllureDeveloperHooks) + + return MetaPluginManager._storage.plugin_manager + + def __getattr__(cls, attr): + pm = MetaPluginManager.get_plugin_manager() + return getattr(pm, attr) + + +class plugin_manager(with_metaclass(MetaPluginManager)): + pass diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/_hooks.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/_hooks.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,98 @@ +from pluggy import HookspecMarker, HookimplMarker + +hookspec = HookspecMarker("allure") +hookimpl = HookimplMarker("allure") + + +class AllureUserHooks(object): + + @hookspec + def decorate_as_title(self, test_title): + """ title """ + + @hookspec + def add_title(self, test_title): + """ title """ + + @hookspec + def decorate_as_description(self, test_description): + """ description """ + + @hookspec + def add_description(self, test_description): + """ description """ + + @hookspec + def decorate_as_description_html(self, test_description_html): + """ description html""" + + @hookspec + def add_description_html(self, test_description_html): + """ description html""" + + @hookspec + def decorate_as_label(self, label_type, labels): + """ label """ + + @hookspec + def add_label(self, label_type, labels): + """ label """ + + @hookspec + def decorate_as_link(self, url, link_type, name): + """ url """ + + @hookspec + def add_link(self, url, link_type, name): + """ url """ + + @hookspec + def start_step(self, uuid, title, params): + """ step """ + + @hookspec + def stop_step(self, uuid, exc_type, exc_val, exc_tb):
""" step """ + + @hookspec + def attach_data(self, body, name, attachment_type, extension): + """ attach data """ + + @hookspec + def attach_file(self, source, name, attachment_type, extension): + """ attach file """ + + +class AllureDeveloperHooks(object): + + @hookspec + def start_fixture(self, parent_uuid, uuid, name, parameters): + """ start fixture""" + + @hookspec + def stop_fixture(self, parent_uuid, uuid, name, exc_type, exc_val, exc_tb): + """ stop fixture """ + + @hookspec + def start_test(self, parent_uuid, uuid, name, parameters, context): + """ start test""" + + @hookspec + def stop_test(self, parent_uuid, uuid, name, context, exc_type, exc_val, exc_tb): + """ stop test """ + + @hookspec + def report_result(self, result): + """ reporting """ + + @hookspec + def report_container(self, container): + """ reporting """ + + @hookspec + def report_attached_file(self, source, file_name): + """ reporting """ + + @hookspec + def report_attached_data(self, body, file_name): + """ reporting """ diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/lifecycle.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/lifecycle.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,147 @@ +from collections import OrderedDict +from contextlib import contextmanager +from allure_commons._core import plugin_manager +from allure_commons.model2 import TestResultContainer +from allure_commons.model2 import TestResult +from allure_commons.model2 import Attachment, ATTACHMENT_PATTERN +from allure_commons.model2 import TestStepResult +from allure_commons.model2 import ExecutableItem +from allure_commons.model2 import TestBeforeResult +from allure_commons.model2 import TestAfterResult +from allure_commons.utils import uuid4 +from allure_commons.utils import now +from allure_commons.types import AttachmentType + + +class AllureLifecycle(object): + def __init__(self): + self._items = OrderedDict() + + def _get_item(self, uuid=None, item_type=None): + uuid = uuid or self._last_item_uuid(item_type=item_type) + return self._items.get(uuid) + + def _pop_item(self, uuid=None, item_type=None): + uuid = uuid or self._last_item_uuid(item_type=item_type) + return self._items.pop(uuid, None) + + def _last_item_uuid(self, item_type=None): + for uuid in reversed(self._items): + item = self._items.get(uuid) + if item_type is None: + return uuid + elif type(item) == item_type or isinstance(item, item_type): + return uuid + + @contextmanager + def schedule_test_case(self, uuid=None): + test_result = TestResult() + test_result.uuid = uuid or uuid4() + self._items[test_result.uuid] = test_result + yield test_result + + @contextmanager + def update_test_case(self, uuid=None): + yield self._get_item(uuid=uuid, item_type=TestResult) + + def write_test_case(self, uuid=None): + test_result = self._pop_item(uuid=uuid, item_type=TestResult) + if test_result: + plugin_manager.hook.report_result(result=test_result) + + @contextmanager + def start_step(self, parent_uuid=None, uuid=None): + parent = self._get_item(uuid=parent_uuid, item_type=ExecutableItem) + step = TestStepResult() + step.start = now() + parent.steps.append(step) + self._items[uuid or uuid4()] = step + yield step + + @contextmanager + def update_step(self, uuid=None): + yield self._get_item(uuid=uuid, item_type=TestStepResult) + + def stop_step(self, uuid=None): + step = self._pop_item(uuid=uuid, item_type=TestStepResult) + if step and not step.stop: + step.stop = now() + + @contextmanager + def 
start_container(self, uuid=None): + container = TestResultContainer(uuid=uuid or uuid4()) + self._items[container.uuid] = container + yield container + + def containers(self): + for item in self._items.values(): + if type(item) == TestResultContainer: + yield item + + @contextmanager + def update_container(self, uuid=None): + yield self._get_item(uuid=uuid, item_type=TestResultContainer) + + def write_container(self, uuid=None): + container = self._pop_item(uuid=uuid, item_type=TestResultContainer) + if container and (container.befores or container.afters): + plugin_manager.hook.report_container(container=container) + + @contextmanager + def start_before_fixture(self, parent_uuid=None, uuid=None): + fixture = TestBeforeResult() + parent = self._get_item(uuid=parent_uuid, item_type=TestResultContainer) + if parent: + parent.befores.append(fixture) + self._items[uuid or uuid4()] = fixture + yield fixture + + @contextmanager + def update_before_fixture(self, uuid=None): + yield self._get_item(uuid=uuid, item_type=TestBeforeResult) + + def stop_before_fixture(self, uuid=None): + fixture = self._pop_item(uuid=uuid, item_type=TestBeforeResult) + if fixture and not fixture.stop: + fixture.stop = now() + + @contextmanager + def start_after_fixture(self, parent_uuid=None, uuid=None): + fixture = TestAfterResult() + parent = self._get_item(uuid=parent_uuid, item_type=TestResultContainer) + if parent: + parent.afters.append(fixture) + self._items[uuid or uuid4()] = fixture + yield fixture + + @contextmanager + def update_after_fixture(self, uuid=None): + yield self._get_item(uuid=uuid, item_type=TestAfterResult) + + def stop_after_fixture(self, uuid=None): + fixture = self._pop_item(uuid=uuid, item_type=TestAfterResult) + if fixture and not fixture.stop: + fixture.stop = now() + + def _attach(self, uuid, name=None, attachment_type=None, extension=None): + mime_type = attachment_type + extension = extension if extension else 'attach' + + if type(attachment_type) is AttachmentType: + extension = attachment_type.extension + mime_type = attachment_type.mime_type + + file_name = ATTACHMENT_PATTERN.format(prefix=uuid, ext=extension) + attachment = Attachment(source=file_name, name=name, type=mime_type) + uuid = self._last_item_uuid(item_type=ExecutableItem) + self._items[uuid].attachments.append(attachment) + + return file_name + + def attach_file(self, uuid, source, name=None, attachment_type=None, extension=None): + file_name = self._attach(uuid, name=name, attachment_type=attachment_type, extension=extension) + plugin_manager.hook.report_attached_file(source=source, file_name=file_name) + + def attach_data(self, uuid, body, name=None, attachment_type=None, extension=None): + file_name = self._attach(uuid, name=name, attachment_type=attachment_type, extension=extension) + plugin_manager.hook.report_attached_data(body=body, file_name=file_name) diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/logger.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/logger.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,88 @@ +import errno +import io +import os +import sys +import json +import uuid +import shutil +from six import text_type +from attr import asdict +from allure_commons import hookimpl + +INDENT = 4 + + +class AllureFileLogger(object): + + def __init__(self, report_dir, clean=False): + self._report_dir = report_dir + + try: + os.makedirs(report_dir) + except OSError as e: + if e.errno != errno.EEXIST: + raise + elif clean: + 
for f in os.listdir(report_dir): + f = os.path.join(report_dir, f) + if os.path.isfile(f): + os.unlink(f) + + def _report_item(self, item): + indent = INDENT if os.environ.get("ALLURE_INDENT_OUTPUT") else None + filename = item.file_pattern.format(prefix=uuid.uuid4()) + data = asdict(item, filter=lambda attr, value: not (type(value) != bool and not bool(value))) + with io.open(os.path.join(self._report_dir, filename), 'w', encoding='utf8') as json_file: + if sys.version_info.major < 3: + json_file.write( + unicode(json.dumps(data, indent=indent, ensure_ascii=False, encoding='utf8'))) # noqa: F821 + else: + json.dump(data, json_file, indent=indent, ensure_ascii=False) + + @hookimpl + def report_result(self, result): + self._report_item(result) + + @hookimpl + def report_container(self, container): + self._report_item(container) + + @hookimpl + def report_attached_file(self, source, file_name): + destination = os.path.join(self._report_dir, file_name) + shutil.copy2(source, destination) + + @hookimpl + def report_attached_data(self, body, file_name): + destination = os.path.join(self._report_dir, file_name) + with open(destination, 'wb') as attached_file: + if isinstance(body, text_type): + attached_file.write(body.encode('utf-8')) + else: + attached_file.write(body) + + +class AllureMemoryLogger(object): + + def __init__(self): + self.test_cases = [] + self.test_containers = [] + self.attachments = {} + + @hookimpl + def report_result(self, result): + data = asdict(result, filter=lambda attr, value: not (type(value) != bool and not bool(value))) + self.test_cases.append(data) + + @hookimpl + def report_container(self, container): + data = asdict(container, filter=lambda attr, value: not (type(value) != bool and not bool(value))) + self.test_containers.append(data) + + @hookimpl + def report_attached_file(self, source, file_name): + pass + + @hookimpl + def report_attached_data(self, body, file_name): + self.attachments[file_name] = body diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/mapping.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/mapping.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,119 @@ +from itertools import chain, islice +import attr +import re +from allure_commons.types import Severity, LabelType, LinkType +from allure_commons.types import ALLURE_UNIQUE_LABELS +from allure_commons.model2 import Label, Link + + +TAG_PREFIX = "allure" + +semi_sep = re.compile(r"allure[\.\w]+:") +eq_sep = re.compile(r"allure[\.\w]+=") + + +def allure_tag_sep(tag): + if semi_sep.search(tag): + return ":" + if eq_sep.search(tag): + return "=" + + +def __is(kind, t): + return kind in [v for k, v in t.__dict__.items() if not k.startswith('__')] + + +def parse_tag(tag, issue_pattern=None, link_pattern=None): + """ + >>> parse_tag("blocker") + Label(name='severity', value='blocker') + + >>> parse_tag("allure.issue:http://example.com/BUG-42") + Link(type='issue', url='http://example.com/BUG-42', name='http://example.com/BUG-42') + + >>> parse_tag("allure.link.home:http://qameta.io") + Link(type='link', url='http://qameta.io', name='home') + + >>> parse_tag("allure.suite:mapping") + Label(name='suite', value='mapping') + + >>> parse_tag("allure.suite:mapping") + Label(name='suite', value='mapping') + + >>> parse_tag("allure.label.owner:me") + Label(name='owner', value='me') + + >>> parse_tag("foo.label:1") + Label(name='tag', value='foo.label:1') + + >>> parse_tag("allure.foo:1") + Label(name='tag', 
value='allure.foo:1') + """ + sep = allure_tag_sep(tag) + schema, value = islice(chain(tag.split(sep, 1), [None]), 2) + prefix, kind, name = islice(chain(schema.split('.'), [None], [None]), 3) + + if tag in [severity for severity in Severity]: + return Label(name=LabelType.SEVERITY, value=tag) + + if prefix == TAG_PREFIX and value is not None: + + if __is(kind, LinkType): + if issue_pattern and kind == "issue" and not value.startswith("http"): + value = issue_pattern.format(value) + if link_pattern and kind == "link" and not value.startswith("http"): + value = link_pattern.format(value) + return Link(type=kind, name=name or value, url=value) + + if __is(kind, LabelType): + return Label(name=kind, value=value) + + if kind == "id": + return Label(name=LabelType.ID, value=value) + + if kind == "label" and name is not None: + return Label(name=name, value=value) + + return Label(name=LabelType.TAG, value=tag) + + +def labels_set(labels): + """ + >>> labels_set([Label(name=LabelType.SEVERITY, value=Severity.NORMAL), + ... Label(name=LabelType.SEVERITY, value=Severity.BLOCKER) + ... ]) + [Label(name='severity', value=<Severity.BLOCKER: 'blocker'>)] + + >>> labels_set([Label(name=LabelType.SEVERITY, value=Severity.NORMAL), + ... Label(name='severity', value='minor') + ... ]) + [Label(name='severity', value='minor')] + + >>> labels_set([Label(name=LabelType.EPIC, value="Epic"), + ... Label(name=LabelType.EPIC, value="Epic") + ... ]) + [Label(name='epic', value='Epic')] + + >>> labels_set([Label(name=LabelType.EPIC, value="Epic1"), + ... Label(name=LabelType.EPIC, value="Epic2") + ... ]) + [Label(name='epic', value='Epic1'), Label(name='epic', value='Epic2')] + """ + class Wl(object): + def __init__(self, label): + self.label = label + + def __repr__(self): + return "{name}{value}".format(**attr.asdict(self.label)) + + def __eq__(self, other): + if self.label.name in ALLURE_UNIQUE_LABELS: + return self.label.name == other.label.name + return repr(self) == repr(other) + + def __hash__(self): + if self.label.name in ALLURE_UNIQUE_LABELS: + return hash(self.label.name) + return hash(repr(self)) + + return sorted([wl.label for wl in set([Wl(label) for label in reversed(labels)])]) diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/model2.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/model2.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,108 @@ +from attr import attrs, attrib +from attr import Factory + + +TEST_GROUP_PATTERN = "{prefix}-container.json" +TEST_CASE_PATTERN = "{prefix}-result.json" +ATTACHMENT_PATTERN = '{prefix}-attachment.{ext}' +INDENT = 4 + + +@attrs +class TestResultContainer(object): + file_pattern = TEST_GROUP_PATTERN + + uuid = attrib(default=None) + name = attrib(default=None) + children = attrib(default=Factory(list)) + description = attrib(default=None) + descriptionHtml = attrib(default=None) + befores = attrib(default=Factory(list)) + afters = attrib(default=Factory(list)) + links = attrib(default=Factory(list)) + start = attrib(default=None) + stop = attrib(default=None) + + +@attrs +class ExecutableItem(object): + name = attrib(default=None) + status = attrib(default=None) + statusDetails = attrib(default=None) + stage = attrib(default=None) + description = attrib(default=None) + descriptionHtml = attrib(default=None) + steps = attrib(default=Factory(list)) + attachments = attrib(default=Factory(list)) + parameters = attrib(default=Factory(list)) + start = attrib(default=None) + stop = attrib(default=None) + + +@attrs 
+class TestResult(ExecutableItem): + file_pattern = TEST_CASE_PATTERN + + uuid = attrib(default=None) + historyId = attrib(default=None) + testCaseId = attrib(default=None) + fullName = attrib(default=None) + labels = attrib(default=Factory(list)) + links = attrib(default=Factory(list)) + + +@attrs +class TestStepResult(ExecutableItem): + id = attrib(default=None) + + +@attrs +class TestBeforeResult(ExecutableItem): + pass + + +@attrs +class TestAfterResult(ExecutableItem): + pass + + +@attrs +class Parameter(object): + name = attrib(default=None) + value = attrib(default=None) + + +@attrs +class Label(object): + name = attrib(default=None) + value = attrib(default=None) + + +@attrs +class Link(object): + type = attrib(default=None) + url = attrib(default=None) + name = attrib(default=None) + + +@attrs +class StatusDetails(object): + known = attrib(default=None) + flaky = attrib(default=None) + message = attrib(default=None) + trace = attrib(default=None) + + +@attrs +class Attachment(object): + name = attrib(default=None) + source = attrib(default=None) + type = attrib(default=None) + + +class Status(object): + FAILED = 'failed' + BROKEN = 'broken' + PASSED = 'passed' + SKIPPED = 'skipped' + UNKNOWN = 'unknown' diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/reporter.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/reporter.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,117 @@ +from collections import OrderedDict + +from allure_commons.types import AttachmentType +from allure_commons.model2 import ExecutableItem +from allure_commons.model2 import TestResult +from allure_commons.model2 import Attachment, ATTACHMENT_PATTERN +from allure_commons.utils import now +from allure_commons._core import plugin_manager + + +class AllureReporter(object): + def __init__(self): + self._items = OrderedDict() + self._orphan_items = [] + + def _update_item(self, uuid, **kwargs): + item = self._items[uuid] if uuid else self._items[next(reversed(self._items))] + for name, value in kwargs.items(): + attr = getattr(item, name) + if isinstance(attr, list): + attr.append(value) + else: + setattr(item, name, value) + + def _last_executable(self): + for _uuid in reversed(self._items): + if isinstance(self._items[_uuid], ExecutableItem): + return _uuid + + def get_item(self, uuid): + return self._items.get(uuid) + + def get_last_item(self, item_type=None): + for _uuid in reversed(self._items): + if item_type is None: + return self._items.get(_uuid) + if type(self._items[_uuid]) == item_type: + return self._items.get(_uuid) + + def start_group(self, uuid, group): + self._items[uuid] = group + + def stop_group(self, uuid, **kwargs): + self._update_item(uuid, **kwargs) + group = self._items.pop(uuid) + plugin_manager.hook.report_container(container=group) + + def update_group(self, uuid, **kwargs): + self._update_item(uuid, **kwargs) + + def start_before_fixture(self, parent_uuid, uuid, fixture): + self._items.get(parent_uuid).befores.append(fixture) + self._items[uuid] = fixture + + def stop_before_fixture(self, uuid, **kwargs): + self._update_item(uuid, **kwargs) + self._items.pop(uuid) + + def start_after_fixture(self, parent_uuid, uuid, fixture): + self._items.get(parent_uuid).afters.append(fixture) + self._items[uuid] = fixture + + def stop_after_fixture(self, uuid, **kwargs): + self._update_item(uuid, **kwargs) + fixture = self._items.pop(uuid) + fixture.stop = now() + + def schedule_test(self, uuid, test_case): + 
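+        # register the scheduled test result under its uuid; it stays in _items
+        # until close_test() pops it and hands it to the report_result hook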
self._items[uuid] = test_case + + def get_test(self, uuid): + return self.get_item(uuid) if uuid else self.get_last_item(TestResult) + + def close_test(self, uuid): + test_case = self._items.pop(uuid) + plugin_manager.hook.report_result(result=test_case) + + def drop_test(self, uuid): + self._items.pop(uuid) + + def start_step(self, parent_uuid, uuid, step): + parent_uuid = parent_uuid if parent_uuid else self._last_executable() + if parent_uuid is None: + self._orphan_items.append(uuid) + else: + self._items[parent_uuid].steps.append(step) + self._items[uuid] = step + + def stop_step(self, uuid, **kwargs): + if uuid in self._orphan_items: + self._orphan_items.remove(uuid) + else: + self._update_item(uuid, **kwargs) + self._items.pop(uuid) + + def _attach(self, uuid, name=None, attachment_type=None, extension=None): + mime_type = attachment_type + extension = extension if extension else 'attach' + + if type(attachment_type) is AttachmentType: + extension = attachment_type.extension + mime_type = attachment_type.mime_type + + file_name = ATTACHMENT_PATTERN.format(prefix=uuid, ext=extension) + attachment = Attachment(source=file_name, name=name, type=mime_type) + last_uuid = self._last_executable() + self._items[last_uuid].attachments.append(attachment) + + return file_name + + def attach_file(self, uuid, source, name=None, attachment_type=None, extension=None): + file_name = self._attach(uuid, name=name, attachment_type=attachment_type, extension=extension) + plugin_manager.hook.report_attached_file(source=source, file_name=file_name) + + def attach_data(self, uuid, body, name=None, attachment_type=None, extension=None): + file_name = self._attach(uuid, name=name, attachment_type=attachment_type, extension=extension) + plugin_manager.hook.report_attached_data(body=body, file_name=file_name) diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/types.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/types.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,64 @@ +from enum import Enum + +ALLURE_UNIQUE_LABELS = ['severity', 'thread', 'host'] + + +class Severity(str, Enum): + BLOCKER = 'blocker' + CRITICAL = 'critical' + NORMAL = 'normal' + MINOR = 'minor' + TRIVIAL = 'trivial' + + +class LinkType(object): + LINK = 'link' + ISSUE = 'issue' + TEST_CASE = 'test_case' + + +class LabelType(str): + EPIC = 'epic' + FEATURE = 'feature' + STORY = 'story' + PARENT_SUITE = 'parentSuite' + SUITE = 'suite' + SUB_SUITE = 'subSuite' + SEVERITY = 'severity' + THREAD = 'thread' + HOST = 'host' + TAG = 'tag' + ID = 'as_id' + FRAMEWORK = 'framework' + LANGUAGE = 'language' + + +class AttachmentType(Enum): + + def __init__(self, mime_type, extension): + self.mime_type = mime_type + self.extension = extension + + TEXT = ("text/plain", "txt") + CSV = ("text/csv", "csv") + TSV = ("text/tab-separated-values", "tsv") + URI_LIST = ("text/uri-list", "uri") + + HTML = ("text/html", "html") + XML = ("application/xml", "xml") + JSON = ("application/json", "json") + YAML = ("application/yaml", "yaml") + PCAP = ("application/vnd.tcpdump.pcap", "pcap") + + PNG = ("image/png", "png") + JPG = ("image/jpg", "jpg") + SVG = ("image/svg-xml", "svg") + GIF = ("image/gif", "gif") + BMP = ("image/bmp", "bmp") + TIFF = ("image/tiff", "tiff") + + MP4 = ("video/mp4", "mp4") + OGG = ("video/ogg", "ogg") + WEBM = ("video/webm", "webm") + + PDF = ("application/pdf", "pdf") diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_commons/utils.py 
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_commons/utils.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,403 @@ +# -*- coding: utf-8 -*- + +import os +import sys +import six +import time +import uuid +import json +import socket +import inspect +import hashlib +import platform +import threading +import traceback +import collections + +from functools import partial + + +def getargspec(func): + """ + Used because getargspec for python 2.7 does not accept functools.partial + which is the type for pytest fixtures. + + getargspec excerpted from: + + sphinx.util.inspect + ~~~~~~~~~~~~~~~~~~~ + Helpers for inspecting Python modules. + :copyright: Copyright 2007-2018 by the Sphinx team, see AUTHORS. + :license: BSD, see LICENSE for details. + + Like inspect.getargspec but supports functools.partial as well. + """ + # noqa: E731 type: (Any) -> Any + if inspect.ismethod(func): + func = func.__func__ + parts = 0, () # noqa: E731 type: Tuple[int, Tuple[unicode, ...]] + if type(func) is partial: + keywords = func.keywords + if keywords is None: + keywords = {} + parts = len(func.args), keywords.keys() + func = func.func + if not inspect.isfunction(func): + raise TypeError('%r is not a Python function' % func) + args, varargs, varkw = inspect.getargs(func.__code__) + func_defaults = func.__defaults__ + if func_defaults is None: + func_defaults = [] + else: + func_defaults = list(func_defaults) + if parts[0]: + args = args[parts[0]:] + if parts[1]: + for arg in parts[1]: + i = args.index(arg) - len(args) # type: ignore + del args[i] + try: + del func_defaults[i] + except IndexError: + pass + return inspect.ArgSpec(args, varargs, varkw, func_defaults) # type: ignore + + +if six.PY3: + from traceback import format_exception_only +else: + from _compat import format_exception_only + + +def md5(*args): + m = hashlib.md5() + for arg in args: + part = arg.encode('utf-8') + m.update(part) + return m.hexdigest() + + +def uuid4(): + return str(uuid.uuid4()) + + +def now(): + return int(round(1000 * time.time())) + + +def platform_label(): + major_version, _, __ = platform.python_version_tuple() + implementation = platform.python_implementation() + return '{implementation}{major_version}'.format(implementation=implementation.lower(), + major_version=major_version) + + +def thread_tag(): + return '{0}-{1}'.format(os.getpid(), threading.current_thread().name) + + +def host_tag(): + return socket.gethostname() + + +def escape_non_unicode_symbols(item): + if not (six.PY2 and isinstance(item, str)): + return item + + def escape_symbol(s): + try: + s.decode(encoding='UTF-8') + return s + except UnicodeDecodeError: + return repr(s)[1:-1] + + return ''.join(map(escape_symbol, item)) + + +def represent(item): + """ + >>> represent(None) + 'None' + + >>> represent(123) + '123' + + >>> import six + >>> expected = u"'hi'" if six.PY2 else "'hi'" + >>> represent('hi') == expected + True + + >>> expected = u"'привет'" if six.PY2 else "'привет'" + >>> represent(u'привет') == expected + True + + >>> represent(bytearray([0xd0, 0xbf])) # doctest: +ELLIPSIS + "<... 'bytearray'>" + + >>> from struct import pack + >>> result = "" if six.PY2 else "" + >>> represent(pack('h', 0x89)) == result + True + + >>> result = "" if six.PY2 else "" + >>> represent(int) == result + True + + >>> represent(represent) # doctest: +ELLIPSIS + '' + + >>> represent([represent]) # doctest: +ELLIPSIS + '[]' + + >>> class ClassWithName(object): + ... 
pass + + >>> represent(ClassWithName) + "" + """ + + if six.PY2 and isinstance(item, str): + try: + item = item.decode(encoding='UTF-8') + except UnicodeDecodeError: + pass + + if isinstance(item, six.text_type): + return u'\'%s\'' % item + elif isinstance(item, (bytes, bytearray)): + return repr(type(item)) + else: + return repr(item) + + +def func_parameters(func, *args, **kwargs): + """ + >>> def helper(func): + ... def wrapper(*args, **kwargs): + ... params = func_parameters(func, *args, **kwargs) + ... print(list(params.items())) + ... return func(*args, **kwargs) + ... return wrapper + + >>> @helper + ... def args(a, b): + ... pass + + >>> args(1, 2) + [('a', '1'), ('b', '2')] + + >>> args(*(1,2)) + [('a', '1'), ('b', '2')] + + >>> args(1, b=2) + [('a', '1'), ('b', '2')] + + >>> @helper + ... def kwargs(a=1, b=2): + ... pass + + >>> kwargs() + [('a', '1'), ('b', '2')] + + >>> kwargs(a=3, b=4) + [('a', '3'), ('b', '4')] + + >>> kwargs(b=4, a=3) + [('a', '3'), ('b', '4')] + + >>> kwargs(a=3) + [('a', '3'), ('b', '2')] + + >>> kwargs(b=4) + [('a', '1'), ('b', '4')] + + >>> @helper + ... def args_kwargs(a, b, c=3, d=4): + ... pass + + >>> args_kwargs(1, 2) + [('a', '1'), ('b', '2'), ('c', '3'), ('d', '4')] + + >>> args_kwargs(1, 2, d=5) + [('a', '1'), ('b', '2'), ('c', '3'), ('d', '5')] + + >>> args_kwargs(1, 2, 5, 6) + [('a', '1'), ('b', '2'), ('c', '5'), ('d', '6')] + + >>> @helper + ... def varargs(*a): + ... pass + + >>> varargs() + [] + + >>> varargs(1, 2) + [('a', '(1, 2)')] + + >>> @helper + ... def keywords(**a): + ... pass + + >>> keywords() + [] + + >>> keywords(a=1, b=2) + [('a', '1'), ('b', '2')] + + >>> @helper + ... def args_varargs(a, b, *c): + ... pass + + >>> args_varargs(1, 2) + [('a', '1'), ('b', '2')] + + >>> args_varargs(1, 2, 2) + [('a', '1'), ('b', '2'), ('c', '(2,)')] + + >>> @helper + ... def args_kwargs_varargs(a, b, c=3, **d): + ... pass + + >>> args_kwargs_varargs(1, 2) + [('a', '1'), ('b', '2'), ('c', '3')] + + >>> args_kwargs_varargs(1, 2, 4, d=5, e=6) + [('a', '1'), ('b', '2'), ('c', '4'), ('d', '5'), ('e', '6')] + + >>> @helper + ... def args_kwargs_varargs_keywords(a, b=2, *c, **d): + ... pass + + >>> args_kwargs_varargs_keywords(1) + [('a', '1'), ('b', '2')] + + >>> args_kwargs_varargs_keywords(1, 2, 4, d=5, e=6) + [('a', '1'), ('b', '2'), ('c', '(4,)'), ('d', '5'), ('e', '6')] + + >>> class Class(object): + ... @staticmethod + ... @helper + ... def static_args(a, b): + ... pass + ... + ... @classmethod + ... @helper + ... def method_args(cls, a, b): + ... pass + ... + ... @helper + ... def args(self, a, b): + ... 
pass + + >>> cls = Class() + + >>> cls.args(1, 2) + [('a', '1'), ('b', '2')] + + >>> cls.method_args(1, 2) + [('a', '1'), ('b', '2')] + + >>> cls.static_args(1, 2) + [('a', '1'), ('b', '2')] + + """ + parameters = {} + arg_spec = getargspec(func) if six.PY2 else inspect.getfullargspec(func) + arg_order = list(arg_spec.args) + args_dict = dict(zip(arg_spec.args, args)) + + if arg_spec.defaults: + kwargs_defaults_dict = dict(zip(arg_spec.args[len(args):], arg_spec.defaults)) + parameters.update(kwargs_defaults_dict) + + if arg_spec.varargs: + arg_order.append(arg_spec.varargs) + varargs = args[len(arg_spec.args):] + parameters.update({arg_spec.varargs: varargs} if varargs else {}) + + if arg_spec.args and arg_spec.args[0] in ['cls', 'self']: + args_dict.pop(arg_spec.args[0], None) + + if kwargs: + if sys.version_info < (3, 6): + # Sort alphabetically as old python versions does + # not preserve call order for kwargs + arg_order.extend(sorted(list(kwargs.keys()))) + else: + # Keep py3.6 behaviour to preserve kwargs order + arg_order.extend(list(kwargs.keys())) + parameters.update(kwargs) + + parameters.update(args_dict) + + items = parameters.iteritems() if six.PY2 else parameters.items() + sorted_items = sorted(map(lambda kv: (kv[0], represent(kv[1])), items), key=lambda x: arg_order.index(x[0])) + + return collections.OrderedDict(sorted_items) + + +def format_traceback(exc_traceback): + return ''.join(traceback.format_tb(exc_traceback)) if exc_traceback else None + + +def format_exception(etype, value): + """ + >>> import sys + + >>> try: + ... assert False, u'Привет' + ... except AssertionError: + ... etype, e, _ = sys.exc_info() + ... format_exception(etype, e) # doctest: +ELLIPSIS + 'AssertionError: ...\\n' + + >>> try: + ... assert False, 'Привет' + ... except AssertionError: + ... etype, e, _ = sys.exc_info() + ... format_exception(etype, e) # doctest: +ELLIPSIS + 'AssertionError: ...\\n' + + >>> try: + ... compile("bla u'Привет'", "fake.py", "exec") + ... except SyntaxError: + ... etype, e, _ = sys.exc_info() + ... format_exception(etype, e) # doctest: +ELLIPSIS + ' File "fake.py", line 1...SyntaxError: invalid syntax\\n' + + >>> try: + ... compile("bla 'Привет'", "fake.py", "exec") + ... except SyntaxError: + ... etype, e, _ = sys.exc_info() + ... format_exception(etype, e) # doctest: +ELLIPSIS + ' File "fake.py", line 1...SyntaxError: invalid syntax\\n' + + >>> from hamcrest import assert_that, equal_to + + >>> try: + ... assert_that('left', equal_to('right')) + ... except AssertionError: + ... etype, e, _ = sys.exc_info() + ... format_exception(etype, e) # doctest: +ELLIPSIS + "AssertionError: \\nExpected:...but:..." + + >>> try: + ... assert_that(u'left', equal_to(u'right')) + ... except AssertionError: + ... etype, e, _ = sys.exc_info() + ... format_exception(etype, e) # doctest: +ELLIPSIS + "AssertionError: \\nExpected:...but:..." 
+ """ + return '\n'.join(format_exception_only(etype, value)) if etype or value else None + + +def get_testplan(): + planned_tests = [] + file_path = os.environ.get("ALLURE_TESTPLAN_PATH") + + if file_path and os.path.exists(file_path): + with open(file_path, 'r') as plan_file: + plan = json.load(plan_file) + planned_tests = plan.get("tests", []) + + return planned_tests diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_python_commons-2.8.36.dist-info/INSTALLER --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_python_commons-2.8.36.dist-info/INSTALLER Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +pip diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_python_commons-2.8.36.dist-info/METADATA --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_python_commons-2.8.36.dist-info/METADATA Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,23 @@ +Metadata-Version: 2.1 +Name: allure-python-commons +Version: 2.8.36 +Summary: Common module for integrate allure with python-based frameworks +Home-page: https://github.com/allure-framework/allure-python +Author: QAMetaSoftware, Stanislav Seliverstov +Author-email: sseliverstov@qameta.io +License: Apache-2.0 +Keywords: allure reporting report-engine +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Topic :: Software Development :: Quality Assurance +Classifier: Topic :: Software Development :: Testing +Requires-Dist: attrs (>=16.0.0) +Requires-Dist: six (>=1.9.0) +Requires-Dist: pluggy (>=0.4.0) +Requires-Dist: enum34 ; python_version < "3.4" + +UNKNOWN + + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_python_commons-2.8.36.dist-info/RECORD --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_python_commons-2.8.36.dist-info/RECORD Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,31 @@ +__pycache__/allure.cpython-39.pyc,, +allure.py,sha256=tI_MyEfGvV5Fzf37BDQgC7_-dvwcTAWLOYbpfB33BbA,1048 +allure_commons/__init__.py,sha256=VSaOVESKG8r9vKRS3IMuW0Fc00WJ7zxtKj-XtS3kGZY,310 +allure_commons/__pycache__/__init__.cpython-39.pyc,, +allure_commons/__pycache__/_allure.cpython-39.pyc,, +allure_commons/__pycache__/_compat.cpython-39.pyc,, +allure_commons/__pycache__/_core.cpython-39.pyc,, +allure_commons/__pycache__/_hooks.cpython-39.pyc,, +allure_commons/__pycache__/lifecycle.cpython-39.pyc,, +allure_commons/__pycache__/logger.cpython-39.pyc,, +allure_commons/__pycache__/mapping.cpython-39.pyc,, +allure_commons/__pycache__/model2.cpython-39.pyc,, +allure_commons/__pycache__/reporter.cpython-39.pyc,, +allure_commons/__pycache__/types.cpython-39.pyc,, +allure_commons/__pycache__/utils.cpython-39.pyc,, +allure_commons/_allure.py,sha256=GbqJ9QMjaw4NA_tFfHKDXRDrLu94DJutju14jyPpYNw,7418 +allure_commons/_compat.py,sha256=MtVJLuMH8QB4s5EfG5rdwMXYAW-2MQl8a41CygNphsg,2803 +allure_commons/_core.py,sha256=Y8eRZtLaVljVxjGF_dA4__rCMrWPK7RXtsJoFl2vTx0,813 +allure_commons/_hooks.py,sha256=PU0eTFnshlGKRDXdDwBPMhaEyDYjhZ6qPd7J_5Bxfg0,2329 +allure_commons/lifecycle.py,sha256=bgofnwvophkjfmwwlqLiMP2dmC3L3DHy8liR--2N4kU,5699 +allure_commons/logger.py,sha256=TMmpTohzlrJEiptyTXVqGqFNbcDz9hoeBLAx2nV7ORU,2763 +allure_commons/mapping.py,sha256=ZWyFLgh7neZ3_bPBjTzSFGtu9a2o8e2Y_krG0bXPb5Y,3860 
+allure_commons/model2.py,sha256=YG08Bda6X1-4NyJtJoJSrXX79RV6UJWo6G9QFp1pCLk,2401 +allure_commons/reporter.py,sha256=3rF3p6RUl_MHMvf1YDhvyH9wiRK5ZBIgevTX_ocKWZk,4351 +allure_commons/types.py,sha256=Ne5Haj1y4zE-uOix5KPBdgN5bmiuO3_yS4NEGBX-28E,1463 +allure_commons/utils.py,sha256=piAiEASbVkySZpRwbWfvsXvruIAipPKnaZI1-ZEPaMY,10319 +allure_python_commons-2.8.36.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +allure_python_commons-2.8.36.dist-info/METADATA,sha256=s6wrcn74NtAtwcM4i29Luyggaxl6a-l0GERmxtpo5qU,792 +allure_python_commons-2.8.36.dist-info/RECORD,, +allure_python_commons-2.8.36.dist-info/WHEEL,sha256=OqRkF0eY5GHssMorFjlbTIq072vpHpF60fIQA6lS9xA,92 +allure_python_commons-2.8.36.dist-info/top_level.txt,sha256=RTeAMoq_LtMxZ4bnDaup54pNGO9rhYvgY8CTJLm9Sdk,22 diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_python_commons-2.8.36.dist-info/WHEEL --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_python_commons-2.8.36.dist-info/WHEEL Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.36.2) +Root-Is-Purelib: true +Tag: py3-none-any + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/allure_python_commons-2.8.36.dist-info/top_level.txt --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/allure_python_commons-2.8.36.dist-info/top_level.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,2 @@ +allure +allure_commons diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/INSTALLER --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/INSTALLER Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +pip diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/LICENSE.txt --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/LICENSE.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,23 @@ +# This is the MIT license + +Copyright (c) 2010 ActiveState Software Inc. + +Permission is hereby granted, free of charge, to any person obtaining a +copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be included +in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
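A note on the dist-info RECORD files above: each entry is "path,hash,size", where the hash is the urlsafe, unpadded base64 form of the file's SHA-256 digest, as specified by PEP 376/PEP 427. A minimal sketch of how such an entry could be recomputed for verification (the file name is just an illustrative stand-in)::

    import base64
    import hashlib
    import os

    def record_entry(path):
        # digest the file, then encode it the way RECORD expects:
        # urlsafe base64 with the trailing '=' padding stripped
        with open(path, 'rb') as fh:
            digest = hashlib.sha256(fh.read()).digest()
        encoded = base64.urlsafe_b64encode(digest).rstrip(b'=').decode('ascii')
        return '{0},sha256={1},{2}'.format(path, encoded, os.path.getsize(path))

    print(record_entry('appdirs.py'))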
+ diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/METADATA --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/METADATA Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,264 @@ +Metadata-Version: 2.1 +Name: appdirs +Version: 1.4.4 +Summary: A small Python module for determining appropriate platform-specific dirs, e.g. a "user data dir". +Home-page: http://github.com/ActiveState/appdirs +Author: Trent Mick +Author-email: trentm@gmail.com +Maintainer: Jeff Rouse +Maintainer-email: jr@its.to +License: MIT +Keywords: application directory log cache user +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Topic :: Software Development :: Libraries :: Python Modules + + +.. image:: https://secure.travis-ci.org/ActiveState/appdirs.png + :target: http://travis-ci.org/ActiveState/appdirs + +the problem +=========== + +What directory should your app use for storing user data? If running on Mac OS X, you +should use:: + + ~/Library/Application Support/ + +If on Windows (at least English Win XP) that should be:: + + C:\Documents and Settings\\Application Data\Local Settings\\ + +or possibly:: + + C:\Documents and Settings\\Application Data\\ + +for `roaming profiles `_ but that is another story. + +On Linux (and other Unices) the dir, according to the `XDG +spec `_, is:: + + ~/.local/share/ + + +``appdirs`` to the rescue +========================= + +This kind of thing is what the ``appdirs`` module is for. ``appdirs`` will +help you choose an appropriate: + +- user data dir (``user_data_dir``) +- user config dir (``user_config_dir``) +- user cache dir (``user_cache_dir``) +- site data dir (``site_data_dir``) +- site config dir (``site_config_dir``) +- user log dir (``user_log_dir``) + +and also: + +- is a single module so other Python packages can include their own private copy +- is slightly opinionated on the directory names used. Look for "OPINION" in + documentation and code for when an opinion is being applied. 
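One consequence of the single-module design listed above is that a package can vendor its own copy of appdirs while still preferring a system-wide install when one exists. A minimal sketch of that fallback import pattern (the mypkg._vendor package path is hypothetical)::

    try:
        # use the system-wide appdirs if it is installed
        from appdirs import AppDirs
    except ImportError:
        # otherwise fall back to the privately bundled copy
        from mypkg._vendor.appdirs import AppDirs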
+ + +some example output +=================== + +On Mac OS X:: + + >>> from appdirs import * + >>> appname = "SuperApp" + >>> appauthor = "Acme" + >>> user_data_dir(appname, appauthor) + '/Users/trentm/Library/Application Support/SuperApp' + >>> site_data_dir(appname, appauthor) + '/Library/Application Support/SuperApp' + >>> user_cache_dir(appname, appauthor) + '/Users/trentm/Library/Caches/SuperApp' + >>> user_log_dir(appname, appauthor) + '/Users/trentm/Library/Logs/SuperApp' + +On Windows 7:: + + >>> from appdirs import * + >>> appname = "SuperApp" + >>> appauthor = "Acme" + >>> user_data_dir(appname, appauthor) + 'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp' + >>> user_data_dir(appname, appauthor, roaming=True) + 'C:\\Users\\trentm\\AppData\\Roaming\\Acme\\SuperApp' + >>> user_cache_dir(appname, appauthor) + 'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp\\Cache' + >>> user_log_dir(appname, appauthor) + 'C:\\Users\\trentm\\AppData\\Local\\Acme\\SuperApp\\Logs' + +On Linux:: + + >>> from appdirs import * + >>> appname = "SuperApp" + >>> appauthor = "Acme" + >>> user_data_dir(appname, appauthor) + '/home/trentm/.local/share/SuperApp + >>> site_data_dir(appname, appauthor) + '/usr/local/share/SuperApp' + >>> site_data_dir(appname, appauthor, multipath=True) + '/usr/local/share/SuperApp:/usr/share/SuperApp' + >>> user_cache_dir(appname, appauthor) + '/home/trentm/.cache/SuperApp' + >>> user_log_dir(appname, appauthor) + '/home/trentm/.cache/SuperApp/log' + >>> user_config_dir(appname) + '/home/trentm/.config/SuperApp' + >>> site_config_dir(appname) + '/etc/xdg/SuperApp' + >>> os.environ['XDG_CONFIG_DIRS'] = '/etc:/usr/local/etc' + >>> site_config_dir(appname, multipath=True) + '/etc/SuperApp:/usr/local/etc/SuperApp' + + +``AppDirs`` for convenience +=========================== + +:: + + >>> from appdirs import AppDirs + >>> dirs = AppDirs("SuperApp", "Acme") + >>> dirs.user_data_dir + '/Users/trentm/Library/Application Support/SuperApp' + >>> dirs.site_data_dir + '/Library/Application Support/SuperApp' + >>> dirs.user_cache_dir + '/Users/trentm/Library/Caches/SuperApp' + >>> dirs.user_log_dir + '/Users/trentm/Library/Logs/SuperApp' + + + +Per-version isolation +===================== + +If you have multiple versions of your app in use that you want to be +able to run side-by-side, then you may want version-isolation for these +dirs:: + + >>> from appdirs import AppDirs + >>> dirs = AppDirs("SuperApp", "Acme", version="1.0") + >>> dirs.user_data_dir + '/Users/trentm/Library/Application Support/SuperApp/1.0' + >>> dirs.site_data_dir + '/Library/Application Support/SuperApp/1.0' + >>> dirs.user_cache_dir + '/Users/trentm/Library/Caches/SuperApp/1.0' + >>> dirs.user_log_dir + '/Users/trentm/Library/Logs/SuperApp/1.0' + + + +appdirs Changelog +================= + +appdirs 1.4.4 +------------- +- [PR #92] Don't import appdirs from setup.py + +Project officially classified as Stable which is important +for inclusion in other distros such as ActivePython. + +First of several incremental releases to catch up on maintenance. 
+ +appdirs 1.4.3 +------------- +- [PR #76] Python 3.6 invalid escape sequence deprecation fixes +- Fix for Python 3.6 support + +appdirs 1.4.2 +------------- +- [PR #84] Allow installing without setuptools +- [PR #86] Fix string delimiters in setup.py description +- Add Python 3.6 support + +appdirs 1.4.1 +------------- +- [issue #38] Fix _winreg import on Windows Py3 +- [issue #55] Make appname optional + +appdirs 1.4.0 +------------- +- [PR #42] AppAuthor is now optional on Windows +- [issue 41] Support Jython on Windows, Mac, and Unix-like platforms. Windows + support requires `JNA `_. +- [PR #44] Fix incorrect behaviour of the site_config_dir method + +appdirs 1.3.0 +------------- +- [Unix, issue 16] Conform to XDG standard, instead of breaking it for + everybody +- [Unix] Removes gratuitous case mangling of the case, since \*nix-es are + usually case sensitive, so mangling is not wise +- [Unix] Fixes the utterly wrong behaviour in ``site_data_dir``, return result + based on XDG_DATA_DIRS and make room for respecting the standard which + specifies XDG_DATA_DIRS is a multiple-value variable +- [Issue 6] Add ``*_config_dir`` which are distinct on nix-es, according to + XDG specs; on Windows and Mac return the corresponding ``*_data_dir`` + +appdirs 1.2.0 +------------- + +- [Unix] Put ``user_log_dir`` under the *cache* dir on Unix. Seems to be more + typical. +- [issue 9] Make ``unicode`` work on py3k. + +appdirs 1.1.0 +------------- + +- [issue 4] Add ``AppDirs.user_log_dir``. +- [Unix, issue 2, issue 7] appdirs now conforms to `XDG base directory spec + `_. +- [Mac, issue 5] Fix ``site_data_dir()`` on Mac. +- [Mac] Drop use of 'Carbon' module in favour of hardcoded paths; supports + Python3 now. +- [Windows] Append "Cache" to ``user_cache_dir`` on Windows by default. Use + ``opinion=False`` option to disable this. +- Add ``appdirs.AppDirs`` convenience class. Usage: + + >>> dirs = AppDirs("SuperApp", "Acme", version="1.0") + >>> dirs.user_data_dir + '/Users/trentm/Library/Application Support/SuperApp/1.0' + +- [Windows] Cherry-pick Komodo's change to downgrade paths to the Windows short + paths if there are high bit chars. +- [Linux] Change default ``user_cache_dir()`` on Linux to be singular, e.g. + "~/.superapp/cache". +- [Windows] Add ``roaming`` option to ``user_data_dir()`` (for use on Windows only) + and change the default ``user_data_dir`` behaviour to use a *non*-roaming + profile dir (``CSIDL_LOCAL_APPDATA`` instead of ``CSIDL_APPDATA``). Why? Because + a large roaming profile can cause login speed issues. The "only syncs on + logout" behaviour can cause surprises in appdata info. + + +appdirs 1.0.1 (never released) +------------------------------ + +Started this changelog 27 July 2010. Before that this module originated in the +`Komodo `_ product as ``applib.py`` and then +as `applib/location.py +`_ (used by +`PyPM `_ in `ActivePython +`_). This is basically a fork of +applib.py 1.0.1 and applib/location.py 1.0.1. 
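To tie the API documented above together, here is a minimal usage sketch: resolve a per-user, per-version config dir and make sure it exists before writing to it (the app and author names are illustrative)::

    import json
    import os

    from appdirs import AppDirs

    dirs = AppDirs("SuperApp", "Acme", version="1.0")
    config_dir = dirs.user_config_dir   # e.g. ~/.config/SuperApp/1.0 on Linux

    # appdirs only computes paths; creating the directory is the caller's job
    os.makedirs(config_dir, exist_ok=True)
    with open(os.path.join(config_dir, "settings.json"), "w") as fh:
        json.dump({"theme": "dark"}, fh)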
+ + + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/RECORD --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/RECORD Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,8 @@ +__pycache__/appdirs.cpython-39.pyc,, +appdirs-1.4.4.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +appdirs-1.4.4.dist-info/LICENSE.txt,sha256=Nt200KdFqTqyAyA9cZCBSxuJcn0lTK_0jHp6-71HAAs,1097 +appdirs-1.4.4.dist-info/METADATA,sha256=k5TVfXMNKGHTfp2wm6EJKTuGwGNuoQR5TqQgH8iwG8M,8981 +appdirs-1.4.4.dist-info/RECORD,, +appdirs-1.4.4.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110 +appdirs-1.4.4.dist-info/top_level.txt,sha256=nKncE8CUqZERJ6VuQWL4_bkunSPDNfn7KZqb4Tr5YEM,8 +appdirs.py,sha256=g99s2sXhnvTEm79oj4bWI0Toapc-_SmKKNXvOXHkVic,24720 diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/WHEEL --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/WHEEL Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/top_level.txt --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/appdirs-1.4.4.dist-info/top_level.txt Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +appdirs diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/appdirs.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/appdirs.py Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,608 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# Copyright (c) 2005-2010 ActiveState Software Inc. +# Copyright (c) 2013 Eddy Petrișor + +"""Utilities for determining application-specific dirs. + +See for details and usage. +""" +# Dev Notes: +# - MSDN on where to store app data files: +# http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120 +# - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html +# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html + +__version__ = "1.4.4" +__version_info__ = tuple(int(segment) for segment in __version__.split(".")) + + +import sys +import os + +PY3 = sys.version_info[0] == 3 + +if PY3: + unicode = str + +if sys.platform.startswith('java'): + import platform + os_name = platform.java_ver()[3][0] + if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc. + system = 'win32' + elif os_name.startswith('Mac'): # "Mac OS X", etc. + system = 'darwin' + else: # "Linux", "SunOS", "FreeBSD", etc. + # Setting this to "linux2" is not ideal, but only Windows or Mac + # are actually checked for and the rest of the module expects + # *sys.platform* style strings. + system = 'linux2' +else: + system = sys.platform + + + +def user_data_dir(appname=None, appauthor=None, version=None, roaming=False): + r"""Return full path to the user-specific data dir for this application. + + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. Typically + it is the owning company name. This falls back to appname. You may + pass False to disable it. 
+ "version" is an optional version path element to append to the + path. You might want to use this if you want multiple versions + of your app to be able to run independently. If used, this + would typically be ".". + Only applied when appname is present. + "roaming" (boolean, default False) can be set True to use the Windows + roaming appdata directory. That means that for users on a Windows + network setup for roaming profiles, this user data will be + sync'd on login. See + + for a discussion of issues. + + Typical user data directories are: + Mac OS X: ~/Library/Application Support/ + Unix: ~/.local/share/ # or in $XDG_DATA_HOME, if defined + Win XP (not roaming): C:\Documents and Settings\\Application Data\\ + Win XP (roaming): C:\Documents and Settings\\Local Settings\Application Data\\ + Win 7 (not roaming): C:\Users\\AppData\Local\\ + Win 7 (roaming): C:\Users\\AppData\Roaming\\ + + For Unix, we follow the XDG spec and support $XDG_DATA_HOME. + That means, by default "~/.local/share/". + """ + if system == "win32": + if appauthor is None: + appauthor = appname + const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA" + path = os.path.normpath(_get_win_folder(const)) + if appname: + if appauthor is not False: + path = os.path.join(path, appauthor, appname) + else: + path = os.path.join(path, appname) + elif system == 'darwin': + path = os.path.expanduser('~/Library/Application Support/') + if appname: + path = os.path.join(path, appname) + else: + path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share")) + if appname: + path = os.path.join(path, appname) + if appname and version: + path = os.path.join(path, version) + return path + + +def site_data_dir(appname=None, appauthor=None, version=None, multipath=False): + r"""Return full path to the user-shared data dir for this application. + + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. Typically + it is the owning company name. This falls back to appname. You may + pass False to disable it. + "version" is an optional version path element to append to the + path. You might want to use this if you want multiple versions + of your app to be able to run independently. If used, this + would typically be ".". + Only applied when appname is present. + "multipath" is an optional parameter only applicable to *nix + which indicates that the entire list of data dirs should be + returned. By default, the first item from XDG_DATA_DIRS is + returned, or '/usr/local/share/', + if XDG_DATA_DIRS is not set + + Typical site data directories are: + Mac OS X: /Library/Application Support/ + Unix: /usr/local/share/ or /usr/share/ + Win XP: C:\Documents and Settings\All Users\Application Data\\ + Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.) + Win 7: C:\ProgramData\\ # Hidden, but writeable on Win 7. + + For Unix, this is using the $XDG_DATA_DIRS[0] default. + + WARNING: Do not use this on Windows. See the Vista-Fail note above for why. 
+ """ + if system == "win32": + if appauthor is None: + appauthor = appname + path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA")) + if appname: + if appauthor is not False: + path = os.path.join(path, appauthor, appname) + else: + path = os.path.join(path, appname) + elif system == 'darwin': + path = os.path.expanduser('/Library/Application Support') + if appname: + path = os.path.join(path, appname) + else: + # XDG default for $XDG_DATA_DIRS + # only first, if multipath is False + path = os.getenv('XDG_DATA_DIRS', + os.pathsep.join(['/usr/local/share', '/usr/share'])) + pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)] + if appname: + if version: + appname = os.path.join(appname, version) + pathlist = [os.sep.join([x, appname]) for x in pathlist] + + if multipath: + path = os.pathsep.join(pathlist) + else: + path = pathlist[0] + return path + + if appname and version: + path = os.path.join(path, version) + return path + + +def user_config_dir(appname=None, appauthor=None, version=None, roaming=False): + r"""Return full path to the user-specific config dir for this application. + + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. Typically + it is the owning company name. This falls back to appname. You may + pass False to disable it. + "version" is an optional version path element to append to the + path. You might want to use this if you want multiple versions + of your app to be able to run independently. If used, this + would typically be ".". + Only applied when appname is present. + "roaming" (boolean, default False) can be set True to use the Windows + roaming appdata directory. That means that for users on a Windows + network setup for roaming profiles, this user data will be + sync'd on login. See + + for a discussion of issues. + + Typical user config directories are: + Mac OS X: same as user_data_dir + Unix: ~/.config/ # or in $XDG_CONFIG_HOME, if defined + Win *: same as user_data_dir + + For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME. + That means, by default "~/.config/". + """ + if system in ["win32", "darwin"]: + path = user_data_dir(appname, appauthor, None, roaming) + else: + path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config")) + if appname: + path = os.path.join(path, appname) + if appname and version: + path = os.path.join(path, version) + return path + + +def site_config_dir(appname=None, appauthor=None, version=None, multipath=False): + r"""Return full path to the user-shared data dir for this application. + + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. Typically + it is the owning company name. This falls back to appname. You may + pass False to disable it. + "version" is an optional version path element to append to the + path. You might want to use this if you want multiple versions + of your app to be able to run independently. If used, this + would typically be ".". + Only applied when appname is present. + "multipath" is an optional parameter only applicable to *nix + which indicates that the entire list of config dirs should be + returned. 
By default, the first item from XDG_CONFIG_DIRS is + returned, or '/etc/xdg/', if XDG_CONFIG_DIRS is not set + + Typical site config directories are: + Mac OS X: same as site_data_dir + Unix: /etc/xdg/ or $XDG_CONFIG_DIRS[i]/ for each value in + $XDG_CONFIG_DIRS + Win *: same as site_data_dir + Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.) + + For Unix, this is using the $XDG_CONFIG_DIRS[0] default, if multipath=False + + WARNING: Do not use this on Windows. See the Vista-Fail note above for why. + """ + if system in ["win32", "darwin"]: + path = site_data_dir(appname, appauthor) + if appname and version: + path = os.path.join(path, version) + else: + # XDG default for $XDG_CONFIG_DIRS + # only first, if multipath is False + path = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg') + pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)] + if appname: + if version: + appname = os.path.join(appname, version) + pathlist = [os.sep.join([x, appname]) for x in pathlist] + + if multipath: + path = os.pathsep.join(pathlist) + else: + path = pathlist[0] + return path + + +def user_cache_dir(appname=None, appauthor=None, version=None, opinion=True): + r"""Return full path to the user-specific cache dir for this application. + + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. Typically + it is the owning company name. This falls back to appname. You may + pass False to disable it. + "version" is an optional version path element to append to the + path. You might want to use this if you want multiple versions + of your app to be able to run independently. If used, this + would typically be ".". + Only applied when appname is present. + "opinion" (boolean) can be False to disable the appending of + "Cache" to the base app data dir for Windows. See + discussion below. + + Typical user cache directories are: + Mac OS X: ~/Library/Caches/ + Unix: ~/.cache/ (XDG default) + Win XP: C:\Documents and Settings\\Local Settings\Application Data\\\Cache + Vista: C:\Users\\AppData\Local\\\Cache + + On Windows the only suggestion in the MSDN docs is that local settings go in + the `CSIDL_LOCAL_APPDATA` directory. This is identical to the non-roaming + app data dir (the default returned by `user_data_dir` above). Apps typically + put cache data somewhere *under* the given dir here. Some examples: + ...\Mozilla\Firefox\Profiles\\Cache + ...\Acme\SuperApp\Cache\1.0 + OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value. + This can be disabled with the `opinion=False` option. + """ + if system == "win32": + if appauthor is None: + appauthor = appname + path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA")) + if appname: + if appauthor is not False: + path = os.path.join(path, appauthor, appname) + else: + path = os.path.join(path, appname) + if opinion: + path = os.path.join(path, "Cache") + elif system == 'darwin': + path = os.path.expanduser('~/Library/Caches') + if appname: + path = os.path.join(path, appname) + else: + path = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache')) + if appname: + path = os.path.join(path, appname) + if appname and version: + path = os.path.join(path, version) + return path + + +def user_state_dir(appname=None, appauthor=None, version=None, roaming=False): + r"""Return full path to the user-specific state dir for this application. 
+ + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. Typically + it is the owning company name. This falls back to appname. You may + pass False to disable it. + "version" is an optional version path element to append to the + path. You might want to use this if you want multiple versions + of your app to be able to run independently. If used, this + would typically be ".". + Only applied when appname is present. + "roaming" (boolean, default False) can be set True to use the Windows + roaming appdata directory. That means that for users on a Windows + network setup for roaming profiles, this user data will be + sync'd on login. See + + for a discussion of issues. + + Typical user state directories are: + Mac OS X: same as user_data_dir + Unix: ~/.local/state/ # or in $XDG_STATE_HOME, if defined + Win *: same as user_data_dir + + For Unix, we follow this Debian proposal + to extend the XDG spec and support $XDG_STATE_HOME. + + That means, by default "~/.local/state/". + """ + if system in ["win32", "darwin"]: + path = user_data_dir(appname, appauthor, None, roaming) + else: + path = os.getenv('XDG_STATE_HOME', os.path.expanduser("~/.local/state")) + if appname: + path = os.path.join(path, appname) + if appname and version: + path = os.path.join(path, version) + return path + + +def user_log_dir(appname=None, appauthor=None, version=None, opinion=True): + r"""Return full path to the user-specific log dir for this application. + + "appname" is the name of application. + If None, just the system directory is returned. + "appauthor" (only used on Windows) is the name of the + appauthor or distributing body for this application. Typically + it is the owning company name. This falls back to appname. You may + pass False to disable it. + "version" is an optional version path element to append to the + path. You might want to use this if you want multiple versions + of your app to be able to run independently. If used, this + would typically be ".". + Only applied when appname is present. + "opinion" (boolean) can be False to disable the appending of + "Logs" to the base app data dir for Windows, and "log" to the + base cache dir for Unix. See discussion below. + + Typical user log directories are: + Mac OS X: ~/Library/Logs/ + Unix: ~/.cache//log # or under $XDG_CACHE_HOME if defined + Win XP: C:\Documents and Settings\\Local Settings\Application Data\\\Logs + Vista: C:\Users\\AppData\Local\\\Logs + + On Windows the only suggestion in the MSDN docs is that local settings + go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in + examples of what some windows apps use for a logs dir.) + + OPINION: This function appends "Logs" to the `CSIDL_LOCAL_APPDATA` + value for Windows and appends "log" to the user cache dir for Unix. + This can be disabled with the `opinion=False` option. 
+ """ + if system == "darwin": + path = os.path.join( + os.path.expanduser('~/Library/Logs'), + appname) + elif system == "win32": + path = user_data_dir(appname, appauthor, version) + version = False + if opinion: + path = os.path.join(path, "Logs") + else: + path = user_cache_dir(appname, appauthor, version) + version = False + if opinion: + path = os.path.join(path, "log") + if appname and version: + path = os.path.join(path, version) + return path + + +class AppDirs(object): + """Convenience wrapper for getting application dirs.""" + def __init__(self, appname=None, appauthor=None, version=None, + roaming=False, multipath=False): + self.appname = appname + self.appauthor = appauthor + self.version = version + self.roaming = roaming + self.multipath = multipath + + @property + def user_data_dir(self): + return user_data_dir(self.appname, self.appauthor, + version=self.version, roaming=self.roaming) + + @property + def site_data_dir(self): + return site_data_dir(self.appname, self.appauthor, + version=self.version, multipath=self.multipath) + + @property + def user_config_dir(self): + return user_config_dir(self.appname, self.appauthor, + version=self.version, roaming=self.roaming) + + @property + def site_config_dir(self): + return site_config_dir(self.appname, self.appauthor, + version=self.version, multipath=self.multipath) + + @property + def user_cache_dir(self): + return user_cache_dir(self.appname, self.appauthor, + version=self.version) + + @property + def user_state_dir(self): + return user_state_dir(self.appname, self.appauthor, + version=self.version) + + @property + def user_log_dir(self): + return user_log_dir(self.appname, self.appauthor, + version=self.version) + + +#---- internal support stuff + +def _get_win_folder_from_registry(csidl_name): + """This is a fallback technique at best. I'm not sure if using the + registry for this guarantees us the correct answer for all CSIDL_* + names. + """ + if PY3: + import winreg as _winreg + else: + import _winreg + + shell_folder_name = { + "CSIDL_APPDATA": "AppData", + "CSIDL_COMMON_APPDATA": "Common AppData", + "CSIDL_LOCAL_APPDATA": "Local AppData", + }[csidl_name] + + key = _winreg.OpenKey( + _winreg.HKEY_CURRENT_USER, + r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders" + ) + dir, type = _winreg.QueryValueEx(key, shell_folder_name) + return dir + + +def _get_win_folder_with_pywin32(csidl_name): + from win32com.shell import shellcon, shell + dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0) + # Try to make this a unicode path because SHGetFolderPath does + # not return unicode strings when there is unicode data in the + # path. + try: + dir = unicode(dir) + + # Downgrade to short path name if have highbit chars. See + # . + has_high_char = False + for c in dir: + if ord(c) > 255: + has_high_char = True + break + if has_high_char: + try: + import win32api + dir = win32api.GetShortPathName(dir) + except ImportError: + pass + except UnicodeError: + pass + return dir + + +def _get_win_folder_with_ctypes(csidl_name): + import ctypes + + csidl_const = { + "CSIDL_APPDATA": 26, + "CSIDL_COMMON_APPDATA": 35, + "CSIDL_LOCAL_APPDATA": 28, + }[csidl_name] + + buf = ctypes.create_unicode_buffer(1024) + ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf) + + # Downgrade to short path name if have highbit chars. See + # . 
+ has_high_char = False + for c in buf: + if ord(c) > 255: + has_high_char = True + break + if has_high_char: + buf2 = ctypes.create_unicode_buffer(1024) + if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024): + buf = buf2 + + return buf.value + +def _get_win_folder_with_jna(csidl_name): + import array + from com.sun import jna + from com.sun.jna.platform import win32 + + buf_size = win32.WinDef.MAX_PATH * 2 + buf = array.zeros('c', buf_size) + shell = win32.Shell32.INSTANCE + shell.SHGetFolderPath(None, getattr(win32.ShlObj, csidl_name), None, win32.ShlObj.SHGFP_TYPE_CURRENT, buf) + dir = jna.Native.toString(buf.tostring()).rstrip("\0") + + # Downgrade to short path name if have highbit chars. See + # . + has_high_char = False + for c in dir: + if ord(c) > 255: + has_high_char = True + break + if has_high_char: + buf = array.zeros('c', buf_size) + kernel = win32.Kernel32.INSTANCE + if kernel.GetShortPathName(dir, buf, buf_size): + dir = jna.Native.toString(buf.tostring()).rstrip("\0") + + return dir + +if system == "win32": + try: + import win32com.shell + _get_win_folder = _get_win_folder_with_pywin32 + except ImportError: + try: + from ctypes import windll + _get_win_folder = _get_win_folder_with_ctypes + except ImportError: + try: + import com.sun.jna + _get_win_folder = _get_win_folder_with_jna + except ImportError: + _get_win_folder = _get_win_folder_from_registry + + +#---- self test code + +if __name__ == "__main__": + appname = "MyApp" + appauthor = "MyCompany" + + props = ("user_data_dir", + "user_config_dir", + "user_cache_dir", + "user_state_dir", + "user_log_dir", + "site_data_dir", + "site_config_dir") + + print("-- app dirs %s --" % __version__) + + print("-- app dirs (with optional 'version')") + dirs = AppDirs(appname, appauthor, version="1.0") + for prop in props: + print("%s: %s" % (prop, getattr(dirs, prop))) + + print("\n-- app dirs (without optional 'version')") + dirs = AppDirs(appname, appauthor) + for prop in props: + print("%s: %s" % (prop, getattr(dirs, prop))) + + print("\n-- app dirs (without optional 'appauthor')") + dirs = AppDirs(appname) + for prop in props: + print("%s: %s" % (prop, getattr(dirs, prop))) + + print("\n-- app dirs (with disabled 'appauthor')") + dirs = AppDirs(appname, appauthor=False) + for prop in props: + print("%s: %s" % (prop, getattr(dirs, prop))) diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/argcomplete-1.12.2.dist-info/INSTALLER --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/argcomplete-1.12.2.dist-info/INSTALLER Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,1 @@ +pip diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/argcomplete-1.12.2.dist-info/LICENSE.rst --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/env/lib/python3.9/site-packages/argcomplete-1.12.2.dist-info/LICENSE.rst Mon Mar 22 18:12:50 2021 +0000 @@ -0,0 +1,177 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
diff -r 000000000000 -r 4f3585e2f14b env/lib/python3.9/site-packages/argcomplete-1.12.2.dist-info/METADATA
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/env/lib/python3.9/site-packages/argcomplete-1.12.2.dist-info/METADATA Mon Mar 22 18:12:50 2021 +0000
@@ -0,0 +1,413 @@
+Metadata-Version: 2.1
+Name: argcomplete
+Version: 1.12.2
+Summary: Bash tab completion for argparse
+Home-page: https://github.com/kislyuk/argcomplete
+Author: Andrey Kislyuk
+Author-email: kislyuk@gmail.com
+License: Apache Software License
+Project-URL: Documentation, https://kislyuk.github.io/argcomplete
+Project-URL: Source Code, https://github.com/kislyuk/argcomplete
+Project-URL: Issue Tracker, https://github.com/kislyuk/argcomplete/issues
+Project-URL: Change Log, https://github.com/kislyuk/argcomplete/blob/master/Changes.rst
+Platform: MacOS X
+Platform: Posix
+Classifier: Environment :: Console
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: Apache Software License
+Classifier: Operating System :: MacOS :: MacOS X
+Classifier: Operating System :: POSIX
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.5
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Classifier: Programming Language :: Python :: Implementation :: PyPy
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Topic :: Software Development
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: System :: Shells
+Classifier: Topic :: Terminals
+Requires-Dist: importlib-metadata (<4,>=0.23) ; python_version == "2.7"
+Requires-Dist: importlib-metadata (<4,>=0.23) ; python_version == "3.5"
+Requires-Dist: importlib-metadata (<4,>=0.23) ; python_version == "3.6"
+Requires-Dist: importlib-metadata (<4,>=0.23) ; python_version == "3.7"
+Provides-Extra: test
+Requires-Dist: coverage ; extra == 'test'
+Requires-Dist: flake8 ; extra == 'test'
+Requires-Dist: pexpect ; extra == 'test'
+Requires-Dist: wheel ; extra == 'test'
+
+argcomplete - Bash tab completion for argparse
+==============================================
+*Tab complete all the things!*
+
+Argcomplete provides easy, extensible command line tab completion of arguments for your Python script.
+
+It makes two assumptions:
+
+* You're using bash as your shell (limited support for zsh, fish, and tcsh is available)
+* You're using `argparse <http://docs.python.org/3/library/argparse.html>`_ to manage your command line arguments/options
+
+Argcomplete is particularly useful if your program has lots of options or subparsers, and if your program can
+dynamically suggest completions for your argument/option values (for example, if the user is browsing resources over
+the network).
+
+Installation
+------------
+::
+
+    pip install argcomplete
+    activate-global-python-argcomplete
+
+See `Activating global completion`_ below for details about the second step (or if it reports an error).
+
+Refresh your bash environment (start a new shell or ``source /etc/profile``).
+Synopsis
+--------
+Python code (e.g. ``my-awesome-script``):
+
+.. code-block:: python
+
+    #!/usr/bin/env python
+    # PYTHON_ARGCOMPLETE_OK
+    import argcomplete, argparse
+    parser = argparse.ArgumentParser()
+    ...
+    argcomplete.autocomplete(parser)
+    args = parser.parse_args()
+    ...
+
+Shellcode (only necessary if global completion is not activated - see `Global completion`_ below), to be put in e.g.
+``.bashrc``::
+
+    eval "$(register-python-argcomplete my-awesome-script)"
+
+argcomplete.autocomplete(*parser*)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This method is the entry point to the module. It must be called **after** ArgumentParser construction is complete, but
+**before** the ``ArgumentParser.parse_args()`` method is called. The method looks for an environment variable that the
+completion hook shellcode sets, and if it's there, collects completions, prints them to the output stream (fd 8 by
+default), and exits. Otherwise, it returns to the caller immediately.
+
+.. admonition:: Side effects
+
+    Argcomplete gets completions by running your program. It intercepts the execution flow at the moment
+    ``argcomplete.autocomplete()`` is called. After sending completions, it exits using ``exit_method`` (``os._exit``
+    by default). This means if your program has any side effects that happen before ``argcomplete`` is called, those
+    side effects will happen every time the user presses ``<TAB>`` (although anything your program prints to stdout or
+    stderr will be suppressed). For this reason it's best to construct the argument parser and call
+    ``argcomplete.autocomplete()`` as early as possible in your execution flow.
+
+.. admonition:: Performance
+
+    If the program takes a long time to get to the point where ``argcomplete.autocomplete()`` is called, the tab
+    completion process will feel sluggish, and the user may lose confidence in it. So it's also important to minimize
+    the startup time of the program up to that point (for example, by deferring initialization or importing of large
+    modules until after parsing options).
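To make the two admonitions above concrete, here is a minimal sketch of the recommended structure. The ``--dataset`` option and the deferred import are illustrative placeholders, not taken from the argcomplete docs:

.. code-block:: python

    #!/usr/bin/env python
    # PYTHON_ARGCOMPLETE_OK
    import argcomplete, argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset")
    # Called as early as possible: during completion this exits (via
    # os._exit by default) before any of the slow work below can run.
    argcomplete.autocomplete(parser)
    args = parser.parse_args()

    # Deferred import: only paid on a real run, so <TAB> stays responsive.
    import json  # stand-in for a genuinely heavy dependency
    print(json.dumps({"dataset": args.dataset}))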
+Specifying completers
+---------------------
+You can specify custom completion functions for your options and arguments. Two styles are supported: callable and
+readline-style. Callable completers are simpler. They are called with the following keyword arguments:
+
+* ``prefix``: The prefix text of the last word before the cursor on the command line.
+  For dynamic completers, this can be used to reduce the work required to generate possible completions.
+* ``action``: The ``argparse.Action`` instance that this completer was called for.
+* ``parser``: The ``argparse.ArgumentParser`` instance that the action was taken by.
+* ``parsed_args``: The result of argument parsing so far (the ``argparse.Namespace`` args object normally returned by
+  ``ArgumentParser.parse_args()``).
+
+Completers should return their completions as a list of strings. An example completer for names of environment
+variables might look like this:
+
+.. code-block:: python
+
+    def EnvironCompleter(**kwargs):
+        return os.environ
+
+To specify a completer for an argument or option, set the ``completer`` attribute of its associated action. An easy
+way to do this at definition time is:
+
+.. code-block:: python
+
+    from argcomplete.completers import EnvironCompleter
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--env-var1").completer = EnvironCompleter
+    parser.add_argument("--env-var2").completer = EnvironCompleter
+    argcomplete.autocomplete(parser)
+
+If you specify the ``choices`` keyword for an argparse option or argument (and don't specify a completer), it will be
+used for completions.
+
+A completer that is initialized with a set of all possible choices of values for its action might look like this:
+
+.. code-block:: python
+
+    class ChoicesCompleter(object):
+        def __init__(self, choices):
+            self.choices = choices
+
+        def __call__(self, **kwargs):
+            return self.choices
+
+The following two ways to specify a static set of choices are equivalent for completion purposes:
+
+.. code-block:: python
+
+    from argcomplete.completers import ChoicesCompleter
+
+    parser.add_argument("--protocol", choices=('http', 'https', 'ssh', 'rsync', 'wss'))
+    parser.add_argument("--proto").completer=ChoicesCompleter(('http', 'https', 'ssh', 'rsync', 'wss'))
+
+Note that if you use the ``choices=`` option, argparse will show
+all these choices in the ``--help`` output by default. To prevent this, set
+``metavar`` (like ``parser.add_argument("--protocol", metavar="PROTOCOL",
+choices=('http', 'https', 'ssh', 'rsync', 'wss'))``).
+
+The following `script `_ uses
+``parsed_args`` and `Requests `_ to query GitHub for publicly known members of an
+organization and complete their names, then prints the member description:
+
+.. code-block:: python
+
+    #!/usr/bin/env python
+    # PYTHON_ARGCOMPLETE_OK
+    import argcomplete, argparse, requests, pprint
+
+    def github_org_members(prefix, parsed_args, **kwargs):
+        resource = "https://api.github.com/orgs/{org}/members".format(org=parsed_args.organization)
+        return (member['login'] for member in requests.get(resource).json() if member['login'].startswith(prefix))
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--organization", help="GitHub organization")
+    parser.add_argument("--member", help="GitHub member").completer = github_org_members
+
+    argcomplete.autocomplete(parser)
+    args = parser.parse_args()
+
+    pprint.pprint(requests.get("https://api.github.com/users/{m}".format(m=args.member)).json())
+
+`Try it `_ like this::
+
+    ./describe_github_user.py --organization heroku --member <TAB>
+
+If you have a useful completer to add to the `completer library
+`_, send a pull request!
+
+Readline-style completers
+~~~~~~~~~~~~~~~~~~~~~~~~~
+The readline_ module defines a completer protocol in rlcompleter_. Readline-style completers are also supported by
+argcomplete, so you can use the same completer object both in an interactive readline-powered shell and on the bash
+command line. For example, you can use the readline-style completer provided by IPython_ to get introspective
+completions like you would get in the IPython shell:
+
+.. _readline: http://docs.python.org/3/library/readline.html
+.. _rlcompleter: http://docs.python.org/3/library/rlcompleter.html#completer-objects
+.. _IPython: http://ipython.org/
+
+.. code-block:: python
+
+    import IPython
+    parser.add_argument("--python-name").completer = IPython.core.completer.Completer()
+
+``argcomplete.CompletionFinder.rl_complete`` can also be used to plug in an argparse parser as a readline completer.
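A sketch of that last point, assuming only what the line above states (that ``argcomplete.CompletionFinder`` can be constructed from a parser and exposes ``rl_complete``); the wiring into ``readline`` itself is plain standard-library usage:

.. code-block:: python

    import argparse, readline
    import argcomplete

    parser = argparse.ArgumentParser()
    parser.add_argument("--protocol", choices=("http", "https"))

    completer = argcomplete.CompletionFinder(parser)
    readline.set_completer_delims("")  # treat the whole line as one completable unit
    readline.set_completer(completer.rl_complete)
    readline.parse_and_bind("tab: complete")
    line = input("args> ")  # pressing TAB here now completes the parser's options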
+Printing warnings in completers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Normal stdout/stderr output is suspended when argcomplete runs. Sometimes, though, when the user presses ``<TAB>``,
+it's appropriate to print information about why completion generation failed. To do this, use ``warn``:
+
+.. code-block:: python
+
+    from argcomplete import warn
+
+    def AwesomeWebServiceCompleter(prefix, **kwargs):
+        if login_failed:
+            warn("Please log in to Awesome Web Service to use autocompletion")
+        return completions
+
+Using a custom completion validator
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+By default, argcomplete validates your completions by checking if they start with the prefix given to the completer.
+You can override this validation check by supplying the ``validator`` keyword to ``argcomplete.autocomplete()``:
+
+.. code-block:: python
+
+    def my_validator(current_input, keyword_to_check_against):
+        # Pass through ALL options even if they don't all start with 'current_input'
+        return True
+
+    argcomplete.autocomplete(parser, validator=my_validator)
+
+Global completion
+-----------------
+In global completion mode, you don't have to register each argcomplete-capable executable separately. Instead, bash
+will look for the string **PYTHON_ARGCOMPLETE_OK** in the first 1024 bytes of any executable that it's running
+completion for, and if it's found, follow the rest of the argcomplete protocol as described above.
+
+Additionally, completion is activated for scripts run as ``python